Test Report: Docker_Linux_containerd_arm64 19649

                    
32fce3c1cb58db02ee1cd4b36165a584c8a30f83:2024-09-16:36244

Failed tests (2/328)

|-------|---------------------------------------------------------|--------------|
| Order | Failed test                                             | Duration (s) |
|-------|---------------------------------------------------------|--------------|
| 29    | TestAddons/serial/Volcano                               | 199.89       |
| 302   | TestStartStop/group/old-k8s-version/serial/SecondStart  | 382.11       |
|-------|---------------------------------------------------------|--------------|
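Both failures can be reproduced locally with go test's -run filter. A minimal sketch, assuming a minikube source checkout; the driver and container-runtime flags this CI job passes to the suite are not recorded in this section, and serial subtests such as SecondStart depend on the earlier steps in their group, so the whole group is selected here:

	# Hypothetical local re-run of the two failed tests; adjust flags to your environment.
	go test ./test/integration -v -timeout 90m -run 'TestAddons/serial/Volcano'
	go test ./test/integration -v -timeout 90m -run 'TestStartStop/group/old-k8s-version'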
TestAddons/serial/Volcano (199.89s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:913: volcano-controller stabilized in 39.527842ms
addons_test.go:905: volcano-admission stabilized in 40.551706ms
addons_test.go:897: volcano-scheduler stabilized in 41.359057ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-jqfsm" [1c814efa-cb87-4548-a04e-fd6e32bdb2df] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.003954232s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-5mhdx" [7e165d53-f5e3-40ea-a63e-aacd63d6370d] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003761029s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-fpbw7" [b8464231-9873-4d7e-bd8b-f22961f2269f] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003255846s
addons_test.go:932: (dbg) Run:  kubectl --context addons-350900 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-350900 create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-350900 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [7ba2f4d6-6dcc-4ab9-9592-6464326d554f] Pending
helpers_test.go:344: "test-job-nginx-0" [7ba2f4d6-6dcc-4ab9-9592-6464326d554f] Pending: PodScheduled:Unschedulable (0/1 nodes are unavailable: 1 Insufficient cpu.)
helpers_test.go:329: TestAddons/serial/Volcano: WARNING: pod list for "my-volcano" "volcano.sh/job-name=test-job" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
addons_test.go:964: ***** TestAddons/serial/Volcano: pod "volcano.sh/job-name=test-job" failed to start within 3m0s: context deadline exceeded ****
addons_test.go:964: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-350900 -n addons-350900
addons_test.go:964: TestAddons/serial/Volcano: showing logs for failed pods as of 2024-09-16 19:19:35.411659682 +0000 UTC m=+487.349893468
addons_test.go:964: (dbg) Run:  kubectl --context addons-350900 describe po test-job-nginx-0 -n my-volcano
addons_test.go:964: (dbg) kubectl --context addons-350900 describe po test-job-nginx-0 -n my-volcano:
Name:             test-job-nginx-0
Namespace:        my-volcano
Priority:         0
Service Account:  default
Node:             <none>
Labels:           volcano.sh/job-name=test-job
                  volcano.sh/job-namespace=my-volcano
                  volcano.sh/queue-name=test
                  volcano.sh/task-index=0
                  volcano.sh/task-spec=nginx
Annotations:      scheduling.k8s.io/group-name: test-job-0f2fe1ae-ee97-49c6-aff8-9aa02c01f8c3
                  volcano.sh/job-name: test-job
                  volcano.sh/job-version: 0
                  volcano.sh/queue-name: test
                  volcano.sh/task-index: 0
                  volcano.sh/task-spec: nginx
                  volcano.sh/template-uid: test-job-nginx
Status:           Pending
IP:
IPs:              <none>
Controlled By:    Job/test-job
Containers:
  nginx:
    Image:      nginx:latest
    Port:       <none>
    Host Port:  <none>
    Command:
      sleep
      10m
    Limits:
      cpu:  1
    Requests:
      cpu:  1
    Environment:
      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
      PROJECT_ID:                      this_is_fake
      GCP_PROJECT:                     this_is_fake
      GCLOUD_PROJECT:                  this_is_fake
      GOOGLE_CLOUD_PROJECT:            this_is_fake
      CLOUDSDK_CORE_PROJECT:           this_is_fake
    Mounts:
      /google-app-creds.json from gcp-creds (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-dg6qp (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  kube-api-access-dg6qp:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
  gcp-creds:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/minikube/google_application_credentials.json
    HostPathType:  File
QoS Class:         Burstable
Node-Selectors:    <none>
Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age    From     Message
  ----     ------            ----   ----     -------
  Warning  FailedScheduling  2m59s  volcano  0/1 nodes are unavailable: 1 Insufficient cpu.
addons_test.go:964: (dbg) Run:  kubectl --context addons-350900 logs test-job-nginx-0 -n my-volcano
addons_test.go:964: (dbg) kubectl --context addons-350900 logs test-job-nginx-0 -n my-volcano:
addons_test.go:965: failed waiting for test-local-path pod: volcano.sh/job-name=test-job within 3m0s: context deadline exceeded
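addons_test.go:938 creates the job from testdata/vcjob.yaml, and the describe output above pins down its shape: one nginx task in queue "test", command sleep 10m, requesting and limited to a full CPU. A reconstruction sketched from that output, useful for reproducing the scheduling failure by hand; the real testdata/vcjob.yaml in the minikube repo may differ in fields kubectl describe does not show, and the my-volcano namespace and test queue are assumed to already exist, as they do at this point in the test:

	kubectl --context addons-350900 apply -f - <<'EOF'
	# Sketch reconstructed from `kubectl describe po test-job-nginx-0`;
	# not the verbatim testdata/vcjob.yaml.
	apiVersion: batch.volcano.sh/v1alpha1
	kind: Job
	metadata:
	  name: test-job
	  namespace: my-volcano
	spec:
	  schedulerName: volcano
	  queue: test
	  tasks:
	  - replicas: 1
	    name: nginx
	    template:
	      spec:
	        restartPolicy: Never
	        containers:
	        - name: nginx
	          image: nginx:latest
	          command: ["sleep", "10m"]
	          resources:
	            requests:
	              cpu: "1"
	            limits:
	              cpu: "1"
	EOF

The scheduler's complaint ("0/1 nodes are unavailable: 1 Insufficient cpu.") against a cpu: 1 request on a VM started with --cpus=2 (NanoCpus: 2000000000 in the docker inspect below) suggests comparing the node's allocatable CPU with what system pods and the enabled addons already request. A hypothetical diagnostic using only standard kubectl; the node name is assumed to match the profile:

	kubectl --context addons-350900 describe node addons-350900 | grep -A 8 'Allocated resources'
	kubectl --context addons-350900 get pods -A \
	  -o custom-columns='NS:.metadata.namespace,POD:.metadata.name,CPU_REQ:.spec.containers[*].resources.requests.cpu'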
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/serial/Volcano]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-350900
helpers_test.go:235: (dbg) docker inspect addons-350900:

-- stdout --
	[
	    {
	        "Id": "15723d309f5ac00c18b4593362f7403433f06dbe7b7c4ab331b3212771319a71",
	        "Created": "2024-09-16T19:12:16.894645878Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 722692,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-16T19:12:17.036172589Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:735d22f77ce2bf9e02c77058920b4d1610fffc1af6c5e42bd1f17e7556552aac",
	        "ResolvConfPath": "/var/lib/docker/containers/15723d309f5ac00c18b4593362f7403433f06dbe7b7c4ab331b3212771319a71/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/15723d309f5ac00c18b4593362f7403433f06dbe7b7c4ab331b3212771319a71/hostname",
	        "HostsPath": "/var/lib/docker/containers/15723d309f5ac00c18b4593362f7403433f06dbe7b7c4ab331b3212771319a71/hosts",
	        "LogPath": "/var/lib/docker/containers/15723d309f5ac00c18b4593362f7403433f06dbe7b7c4ab331b3212771319a71/15723d309f5ac00c18b4593362f7403433f06dbe7b7c4ab331b3212771319a71-json.log",
	        "Name": "/addons-350900",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-350900:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-350900",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/5c5427e7203a02c1b56479bd19aba37255b71cb7c4f891059b7f3e01586046d2-init/diff:/var/lib/docker/overlay2/0f997814f4acb2707641eca22120a369f13df677c67e30cebac9ef1a05c579dd/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5c5427e7203a02c1b56479bd19aba37255b71cb7c4f891059b7f3e01586046d2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5c5427e7203a02c1b56479bd19aba37255b71cb7c4f891059b7f3e01586046d2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5c5427e7203a02c1b56479bd19aba37255b71cb7c4f891059b7f3e01586046d2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-350900",
	                "Source": "/var/lib/docker/volumes/addons-350900/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-350900",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-350900",
	                "name.minikube.sigs.k8s.io": "addons-350900",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "79c3342625bd7dc6b6a45d305d8b38e151f93bca7e4922b2358cb996039b10de",
	            "SandboxKey": "/var/run/docker/netns/79c3342625bd",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33530"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33531"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33534"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33532"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33533"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-350900": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "ed7c945a3008082df7fe4bd43e5dfb5d7ae48279bc14576d283dc4e599f81c61",
	                    "EndpointID": "ba6e9575d281d90e2dab17002ad3b93763d99750345cfba9463d0b7e5b33c343",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-350900",
	                        "15723d309f5a"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
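The NetworkSettings.Ports mapping above is what minikube itself uses to reach the guest over SSH: in the provisioning log below (19:12:17) it resolves 22/tcp to host port 33530 with a Go template over docker container inspect. The same lookup from a shell, as a sketch:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-350900
	# Equivalent, and also prints the bind address:
	docker port addons-350900 22/tcp   # -> 127.0.0.1:33530 in this run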
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-350900 -n addons-350900
helpers_test.go:244: <<< TestAddons/serial/Volcano FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/serial/Volcano]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-350900 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-350900 logs -n 25: (1.617213528s)
helpers_test.go:252: TestAddons/serial/Volcano logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-655237   | jenkins | v1.34.0 | 16 Sep 24 19:11 UTC |                     |
	|         | -p download-only-655237              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 16 Sep 24 19:11 UTC | 16 Sep 24 19:11 UTC |
	| delete  | -p download-only-655237              | download-only-655237   | jenkins | v1.34.0 | 16 Sep 24 19:11 UTC | 16 Sep 24 19:11 UTC |
	| start   | -o=json --download-only              | download-only-558265   | jenkins | v1.34.0 | 16 Sep 24 19:11 UTC |                     |
	|         | -p download-only-558265              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1         |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 16 Sep 24 19:11 UTC | 16 Sep 24 19:11 UTC |
	| delete  | -p download-only-558265              | download-only-558265   | jenkins | v1.34.0 | 16 Sep 24 19:11 UTC | 16 Sep 24 19:11 UTC |
	| delete  | -p download-only-655237              | download-only-655237   | jenkins | v1.34.0 | 16 Sep 24 19:11 UTC | 16 Sep 24 19:11 UTC |
	| delete  | -p download-only-558265              | download-only-558265   | jenkins | v1.34.0 | 16 Sep 24 19:11 UTC | 16 Sep 24 19:11 UTC |
	| start   | --download-only -p                   | download-docker-112124 | jenkins | v1.34.0 | 16 Sep 24 19:11 UTC |                     |
	|         | download-docker-112124               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p download-docker-112124            | download-docker-112124 | jenkins | v1.34.0 | 16 Sep 24 19:11 UTC | 16 Sep 24 19:11 UTC |
	| start   | --download-only -p                   | binary-mirror-811581   | jenkins | v1.34.0 | 16 Sep 24 19:11 UTC |                     |
	|         | binary-mirror-811581                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:44327               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-811581              | binary-mirror-811581   | jenkins | v1.34.0 | 16 Sep 24 19:11 UTC | 16 Sep 24 19:11 UTC |
	| addons  | enable dashboard -p                  | addons-350900          | jenkins | v1.34.0 | 16 Sep 24 19:11 UTC |                     |
	|         | addons-350900                        |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                 | addons-350900          | jenkins | v1.34.0 | 16 Sep 24 19:11 UTC |                     |
	|         | addons-350900                        |                        |         |         |                     |                     |
	| start   | -p addons-350900 --wait=true         | addons-350900          | jenkins | v1.34.0 | 16 Sep 24 19:11 UTC | 16 Sep 24 19:16 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 19:11:52
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 19:11:52.196660  722185 out.go:345] Setting OutFile to fd 1 ...
	I0916 19:11:52.196855  722185 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 19:11:52.196888  722185 out.go:358] Setting ErrFile to fd 2...
	I0916 19:11:52.196909  722185 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 19:11:52.197176  722185 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19649-716050/.minikube/bin
	I0916 19:11:52.197667  722185 out.go:352] Setting JSON to false
	I0916 19:11:52.198615  722185 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":10426,"bootTime":1726503487,"procs":156,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0916 19:11:52.198729  722185 start.go:139] virtualization:  
	I0916 19:11:52.200828  722185 out.go:177] * [addons-350900] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0916 19:11:52.202223  722185 out.go:177]   - MINIKUBE_LOCATION=19649
	I0916 19:11:52.202304  722185 notify.go:220] Checking for updates...
	I0916 19:11:52.204468  722185 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 19:11:52.206118  722185 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19649-716050/kubeconfig
	I0916 19:11:52.207481  722185 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19649-716050/.minikube
	I0916 19:11:52.208587  722185 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0916 19:11:52.209811  722185 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 19:11:52.211288  722185 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 19:11:52.233660  722185 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0916 19:11:52.233791  722185 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 19:11:52.294421  722185 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-16 19:11:52.284407518 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0916 19:11:52.294533  722185 docker.go:318] overlay module found
	I0916 19:11:52.296134  722185 out.go:177] * Using the docker driver based on user configuration
	I0916 19:11:52.297287  722185 start.go:297] selected driver: docker
	I0916 19:11:52.297303  722185 start.go:901] validating driver "docker" against <nil>
	I0916 19:11:52.297317  722185 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 19:11:52.297973  722185 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 19:11:52.353140  722185 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-16 19:11:52.343798732 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0916 19:11:52.353357  722185 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 19:11:52.353588  722185 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 19:11:52.355195  722185 out.go:177] * Using Docker driver with root privileges
	I0916 19:11:52.356463  722185 cni.go:84] Creating CNI manager for ""
	I0916 19:11:52.356549  722185 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0916 19:11:52.356564  722185 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0916 19:11:52.356658  722185 start.go:340] cluster config:
	{Name:addons-350900 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-350900 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 19:11:52.358046  722185 out.go:177] * Starting "addons-350900" primary control-plane node in "addons-350900" cluster
	I0916 19:11:52.359372  722185 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0916 19:11:52.360887  722185 out.go:177] * Pulling base image v0.0.45-1726481311-19649 ...
	I0916 19:11:52.362193  722185 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0916 19:11:52.362258  722185 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19649-716050/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4
	I0916 19:11:52.362272  722185 cache.go:56] Caching tarball of preloaded images
	I0916 19:11:52.362275  722185 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc in local docker daemon
	I0916 19:11:52.362365  722185 preload.go:172] Found /home/jenkins/minikube-integration/19649-716050/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0916 19:11:52.362376  722185 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on containerd
	I0916 19:11:52.362735  722185 profile.go:143] Saving config to /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/addons-350900/config.json ...
	I0916 19:11:52.362766  722185 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/addons-350900/config.json: {Name:mk6258dd8595dacc5d853eeb6aa6f61a9fabcf4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 19:11:52.376390  722185 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc to local cache
	I0916 19:11:52.376512  722185 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc in local cache directory
	I0916 19:11:52.376538  722185 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc in local cache directory, skipping pull
	I0916 19:11:52.376548  722185 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc exists in cache, skipping pull
	I0916 19:11:52.376556  722185 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc as a tarball
	I0916 19:11:52.376566  722185 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc from local cache
	I0916 19:12:09.631301  722185 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc from cached tarball
	I0916 19:12:09.631355  722185 cache.go:194] Successfully downloaded all kic artifacts
	I0916 19:12:09.631386  722185 start.go:360] acquireMachinesLock for addons-350900: {Name:mkcc7a0fba58d707bc22d9946cec7ac71eb7dd17 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 19:12:09.631512  722185 start.go:364] duration metric: took 93.923µs to acquireMachinesLock for "addons-350900"
	I0916 19:12:09.631545  722185 start.go:93] Provisioning new machine with config: &{Name:addons-350900 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-350900 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0916 19:12:09.631640  722185 start.go:125] createHost starting for "" (driver="docker")
	I0916 19:12:09.633100  722185 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0916 19:12:09.633342  722185 start.go:159] libmachine.API.Create for "addons-350900" (driver="docker")
	I0916 19:12:09.633379  722185 client.go:168] LocalClient.Create starting
	I0916 19:12:09.633486  722185 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19649-716050/.minikube/certs/ca.pem
	I0916 19:12:09.854660  722185 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19649-716050/.minikube/certs/cert.pem
	I0916 19:12:10.492788  722185 cli_runner.go:164] Run: docker network inspect addons-350900 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0916 19:12:10.510326  722185 cli_runner.go:211] docker network inspect addons-350900 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0916 19:12:10.510426  722185 network_create.go:284] running [docker network inspect addons-350900] to gather additional debugging logs...
	I0916 19:12:10.510452  722185 cli_runner.go:164] Run: docker network inspect addons-350900
	W0916 19:12:10.527102  722185 cli_runner.go:211] docker network inspect addons-350900 returned with exit code 1
	I0916 19:12:10.527133  722185 network_create.go:287] error running [docker network inspect addons-350900]: docker network inspect addons-350900: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-350900 not found
	I0916 19:12:10.527176  722185 network_create.go:289] output of [docker network inspect addons-350900]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-350900 not found
	
	** /stderr **
	I0916 19:12:10.527386  722185 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 19:12:10.543891  722185 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40017cbbd0}
	I0916 19:12:10.543939  722185 network_create.go:124] attempt to create docker network addons-350900 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0916 19:12:10.544005  722185 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-350900 addons-350900
	I0916 19:12:10.647861  722185 network_create.go:108] docker network addons-350900 192.168.49.0/24 created
	I0916 19:12:10.647899  722185 kic.go:121] calculated static IP "192.168.49.2" for the "addons-350900" container
	I0916 19:12:10.647976  722185 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0916 19:12:10.662655  722185 cli_runner.go:164] Run: docker volume create addons-350900 --label name.minikube.sigs.k8s.io=addons-350900 --label created_by.minikube.sigs.k8s.io=true
	I0916 19:12:10.681205  722185 oci.go:103] Successfully created a docker volume addons-350900
	I0916 19:12:10.681324  722185 cli_runner.go:164] Run: docker run --rm --name addons-350900-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-350900 --entrypoint /usr/bin/test -v addons-350900:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc -d /var/lib
	I0916 19:12:12.758669  722185 cli_runner.go:217] Completed: docker run --rm --name addons-350900-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-350900 --entrypoint /usr/bin/test -v addons-350900:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc -d /var/lib: (2.07729592s)
	I0916 19:12:12.758701  722185 oci.go:107] Successfully prepared a docker volume addons-350900
	I0916 19:12:12.758729  722185 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0916 19:12:12.758751  722185 kic.go:194] Starting extracting preloaded images to volume ...
	I0916 19:12:12.758839  722185 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19649-716050/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-350900:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc -I lz4 -xf /preloaded.tar -C /extractDir
	I0916 19:12:16.822456  722185 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19649-716050/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-350900:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc -I lz4 -xf /preloaded.tar -C /extractDir: (4.063575932s)
	I0916 19:12:16.822487  722185 kic.go:203] duration metric: took 4.063733222s to extract preloaded images to volume ...
	W0916 19:12:16.822629  722185 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0916 19:12:16.822748  722185 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0916 19:12:16.879481  722185 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-350900 --name addons-350900 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-350900 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-350900 --network addons-350900 --ip 192.168.49.2 --volume addons-350900:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc
	I0916 19:12:17.214876  722185 cli_runner.go:164] Run: docker container inspect addons-350900 --format={{.State.Running}}
	I0916 19:12:17.235175  722185 cli_runner.go:164] Run: docker container inspect addons-350900 --format={{.State.Status}}
	I0916 19:12:17.260373  722185 cli_runner.go:164] Run: docker exec addons-350900 stat /var/lib/dpkg/alternatives/iptables
	I0916 19:12:17.351762  722185 oci.go:144] the created container "addons-350900" has a running status.
	I0916 19:12:17.351792  722185 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19649-716050/.minikube/machines/addons-350900/id_rsa...
	I0916 19:12:17.633000  722185 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19649-716050/.minikube/machines/addons-350900/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0916 19:12:17.658723  722185 cli_runner.go:164] Run: docker container inspect addons-350900 --format={{.State.Status}}
	I0916 19:12:17.677830  722185 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0916 19:12:17.677851  722185 kic_runner.go:114] Args: [docker exec --privileged addons-350900 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0916 19:12:17.759002  722185 cli_runner.go:164] Run: docker container inspect addons-350900 --format={{.State.Status}}
	I0916 19:12:17.787536  722185 machine.go:93] provisionDockerMachine start ...
	I0916 19:12:17.787627  722185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-350900
	I0916 19:12:17.809001  722185 main.go:141] libmachine: Using SSH client type: native
	I0916 19:12:17.809282  722185 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 33530 <nil> <nil>}
	I0916 19:12:17.809298  722185 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 19:12:17.810163  722185 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60864->127.0.0.1:33530: read: connection reset by peer
	I0916 19:12:20.950915  722185 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-350900
	
	I0916 19:12:20.950941  722185 ubuntu.go:169] provisioning hostname "addons-350900"
	I0916 19:12:20.951006  722185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-350900
	I0916 19:12:20.968035  722185 main.go:141] libmachine: Using SSH client type: native
	I0916 19:12:20.968281  722185 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 33530 <nil> <nil>}
	I0916 19:12:20.968298  722185 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-350900 && echo "addons-350900" | sudo tee /etc/hostname
	I0916 19:12:21.118742  722185 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-350900
	
	I0916 19:12:21.118831  722185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-350900
	I0916 19:12:21.136665  722185 main.go:141] libmachine: Using SSH client type: native
	I0916 19:12:21.136916  722185 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 33530 <nil> <nil>}
	I0916 19:12:21.136938  722185 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-350900' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-350900/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-350900' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 19:12:21.275369  722185 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 19:12:21.275395  722185 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19649-716050/.minikube CaCertPath:/home/jenkins/minikube-integration/19649-716050/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19649-716050/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19649-716050/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19649-716050/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19649-716050/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19649-716050/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19649-716050/.minikube}
	I0916 19:12:21.275453  722185 ubuntu.go:177] setting up certificates
	I0916 19:12:21.275469  722185 provision.go:84] configureAuth start
	I0916 19:12:21.275544  722185 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-350900
	I0916 19:12:21.291548  722185 provision.go:143] copyHostCerts
	I0916 19:12:21.291634  722185 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19649-716050/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19649-716050/.minikube/ca.pem (1082 bytes)
	I0916 19:12:21.291772  722185 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19649-716050/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19649-716050/.minikube/cert.pem (1123 bytes)
	I0916 19:12:21.291839  722185 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19649-716050/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19649-716050/.minikube/key.pem (1675 bytes)
	I0916 19:12:21.291902  722185 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19649-716050/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19649-716050/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19649-716050/.minikube/certs/ca-key.pem org=jenkins.addons-350900 san=[127.0.0.1 192.168.49.2 addons-350900 localhost minikube]
	I0916 19:12:21.650888  722185 provision.go:177] copyRemoteCerts
	I0916 19:12:21.650959  722185 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 19:12:21.651005  722185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-350900
	I0916 19:12:21.667520  722185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/19649-716050/.minikube/machines/addons-350900/id_rsa Username:docker}
	I0916 19:12:21.768261  722185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-716050/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0916 19:12:21.792716  722185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-716050/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0916 19:12:21.816781  722185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-716050/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0916 19:12:21.840340  722185 provision.go:87] duration metric: took 564.85548ms to configureAuth
	I0916 19:12:21.840423  722185 ubuntu.go:193] setting minikube options for container-runtime
	I0916 19:12:21.840658  722185 config.go:182] Loaded profile config "addons-350900": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 19:12:21.840676  722185 machine.go:96] duration metric: took 4.053122045s to provisionDockerMachine
	I0916 19:12:21.840685  722185 client.go:171] duration metric: took 12.207294092s to LocalClient.Create
	I0916 19:12:21.840705  722185 start.go:167] duration metric: took 12.207364007s to libmachine.API.Create "addons-350900"
	I0916 19:12:21.840715  722185 start.go:293] postStartSetup for "addons-350900" (driver="docker")
	I0916 19:12:21.840724  722185 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 19:12:21.840784  722185 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 19:12:21.840832  722185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-350900
	I0916 19:12:21.858428  722185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/19649-716050/.minikube/machines/addons-350900/id_rsa Username:docker}
	I0916 19:12:21.956557  722185 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 19:12:21.959750  722185 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 19:12:21.959789  722185 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 19:12:21.959801  722185 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 19:12:21.959808  722185 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0916 19:12:21.959819  722185 filesync.go:126] Scanning /home/jenkins/minikube-integration/19649-716050/.minikube/addons for local assets ...
	I0916 19:12:21.959896  722185 filesync.go:126] Scanning /home/jenkins/minikube-integration/19649-716050/.minikube/files for local assets ...
	I0916 19:12:21.959923  722185 start.go:296] duration metric: took 119.201568ms for postStartSetup
	I0916 19:12:21.960251  722185 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-350900
	I0916 19:12:21.976549  722185 profile.go:143] Saving config to /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/addons-350900/config.json ...
	I0916 19:12:21.976859  722185 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 19:12:21.976913  722185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-350900
	I0916 19:12:21.994054  722185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/19649-716050/.minikube/machines/addons-350900/id_rsa Username:docker}
	I0916 19:12:22.088117  722185 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
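
The two df probes above sample disk pressure on /var: awk 'NR==2{print $5}' selects the data row (row 2, after the header) and its Use% column, while -BG forces whole-gibibyte units so $4 is the available space. Run standalone they look like this (outputs are examples, not values from this run):

    df -h /var  | awk 'NR==2{print $5}'   # e.g. 23%  (percent of /var in use)
    df -BG /var | awk 'NR==2{print $4}'   # e.g. 189G (space still available)
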
	I0916 19:12:22.092831  722185 start.go:128] duration metric: took 12.46117482s to createHost
	I0916 19:12:22.092862  722185 start.go:83] releasing machines lock for "addons-350900", held for 12.461328163s
	I0916 19:12:22.092938  722185 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-350900
	I0916 19:12:22.109727  722185 ssh_runner.go:195] Run: cat /version.json
	I0916 19:12:22.109790  722185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-350900
	I0916 19:12:22.110068  722185 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 19:12:22.110153  722185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-350900
	I0916 19:12:22.127534  722185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/19649-716050/.minikube/machines/addons-350900/id_rsa Username:docker}
	I0916 19:12:22.138308  722185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/19649-716050/.minikube/machines/addons-350900/id_rsa Username:docker}
	I0916 19:12:22.349195  722185 ssh_runner.go:195] Run: systemctl --version
	I0916 19:12:22.353616  722185 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 19:12:22.357928  722185 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0916 19:12:22.383575  722185 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0916 19:12:22.383694  722185 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 19:12:22.414280  722185 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
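
The two find invocations above first patch any loopback CNI config (adding a "name" field and pinning cniVersion to 1.0.0), then park bridge/podman configs under a .mk_disabled suffix so the kindnet CNI applied later owns pod networking. Listing the directory afterwards should show only the parked files named in the log, roughly:

    ls /etc/cni/net.d
    # e.g. 87-podman-bridge.conflist.mk_disabled  100-crio-bridge.conf.mk_disabled ...
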
	I0916 19:12:22.414353  722185 start.go:495] detecting cgroup driver to use...
	I0916 19:12:22.414411  722185 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0916 19:12:22.414493  722185 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0916 19:12:22.427201  722185 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0916 19:12:22.439112  722185 docker.go:217] disabling cri-docker service (if available) ...
	I0916 19:12:22.439218  722185 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 19:12:22.453598  722185 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 19:12:22.468554  722185 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 19:12:22.555545  722185 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 19:12:22.652921  722185 docker.go:233] disabling docker service ...
	I0916 19:12:22.653020  722185 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 19:12:22.673393  722185 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 19:12:22.685603  722185 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 19:12:22.768226  722185 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 19:12:22.854650  722185 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
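
Only one runtime may own the CRI socket, so minikube stops, disables, and masks cri-docker and docker before configuring containerd; masking symlinks the unit to /dev/null so nothing can start it again. A minimal check of the end state:

    sudo systemctl is-enabled docker.service   # should print "masked" after the steps above
    sudo systemctl is-active  docker.service   # should print "inactive"
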
	I0916 19:12:22.866286  722185 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 19:12:22.882670  722185 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0916 19:12:22.892453  722185 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0916 19:12:22.902164  722185 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0916 19:12:22.902254  722185 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0916 19:12:22.912206  722185 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 19:12:22.922538  722185 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0916 19:12:22.932605  722185 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 19:12:22.942602  722185 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 19:12:22.952830  722185 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0916 19:12:22.963560  722185 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0916 19:12:22.973862  722185 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0916 19:12:22.985074  722185 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 19:12:22.994360  722185 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 19:12:23.016370  722185 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 19:12:23.096860  722185 ssh_runner.go:195] Run: sudo systemctl restart containerd
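
The sed pipeline above rewrites /etc/containerd/config.toml in place: pause:3.10 as the sandbox image, SystemdCgroup = false to match the detected cgroupfs driver, the runc v2 shim, and conf_dir pointed at /etc/cni/net.d. A sketch for sanity-checking the result, either by grepping the file or by asking containerd for its merged effective config:

    grep -E 'SystemdCgroup|sandbox_image|conf_dir' /etc/containerd/config.toml
    sudo containerd config dump | grep -i systemdcgroup   # effective value after the restart
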
	I0916 19:12:23.235521  722185 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0916 19:12:23.235677  722185 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0916 19:12:23.239472  722185 start.go:563] Will wait 60s for crictl version
	I0916 19:12:23.239589  722185 ssh_runner.go:195] Run: which crictl
	I0916 19:12:23.243203  722185 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 19:12:23.278838  722185 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
	I0916 19:12:23.278991  722185 ssh_runner.go:195] Run: containerd --version
	I0916 19:12:23.302274  722185 ssh_runner.go:195] Run: containerd --version
	I0916 19:12:23.328951  722185 out.go:177] * Preparing Kubernetes v1.31.1 on containerd 1.7.22 ...
	I0916 19:12:23.331761  722185 cli_runner.go:164] Run: docker network inspect addons-350900 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 19:12:23.347410  722185 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0916 19:12:23.351081  722185 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
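
The bash one-liner above is an idempotent /etc/hosts edit: strip any stale host.minikube.internal line, append a fresh one, write to a temp file, and cp it back over /etc/hosts (editing the file in place can fail inside containers where it is a bind mount). Afterwards the file should contain:

    $ grep host.minikube.internal /etc/hosts
    192.168.49.1	host.minikube.internal
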
	I0916 19:12:23.361816  722185 kubeadm.go:883] updating cluster {Name:addons-350900 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-350900 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 19:12:23.361948  722185 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0916 19:12:23.362023  722185 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 19:12:23.398019  722185 containerd.go:627] all images are preloaded for containerd runtime.
	I0916 19:12:23.398043  722185 containerd.go:534] Images already preloaded, skipping extraction
	I0916 19:12:23.398112  722185 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 19:12:23.433428  722185 containerd.go:627] all images are preloaded for containerd runtime.
	I0916 19:12:23.433452  722185 cache_images.go:84] Images are preloaded, skipping loading
	I0916 19:12:23.433461  722185 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 containerd true true} ...
	I0916 19:12:23.433592  722185 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-350900 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-350900 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 19:12:23.433680  722185 ssh_runner.go:195] Run: sudo crictl info
	I0916 19:12:23.472375  722185 cni.go:84] Creating CNI manager for ""
	I0916 19:12:23.472402  722185 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0916 19:12:23.472412  722185 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 19:12:23.472434  722185 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-350900 NodeName:addons-350900 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 19:12:23.472578  722185 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-350900"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
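
The rendered manifest above bundles four objects in one file: InitConfiguration and ClusterConfiguration (kubeadm.k8s.io/v1beta3) plus KubeletConfiguration and KubeProxyConfiguration. Assuming this kubeadm build ships the validate subcommand, the file can be checked before init consumes it; a sketch using the paths from this run:

    # Sketch: validate the rendered config before 'kubeadm init' reads it.
    sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml
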
	
	I0916 19:12:23.472650  722185 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 19:12:23.481499  722185 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 19:12:23.481572  722185 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 19:12:23.490212  722185 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0916 19:12:23.508119  722185 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 19:12:23.526521  722185 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2167 bytes)
	I0916 19:12:23.544547  722185 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0916 19:12:23.548090  722185 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 19:12:23.559006  722185 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 19:12:23.641770  722185 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 19:12:23.657256  722185 certs.go:68] Setting up /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/addons-350900 for IP: 192.168.49.2
	I0916 19:12:23.657288  722185 certs.go:194] generating shared ca certs ...
	I0916 19:12:23.657307  722185 certs.go:226] acquiring lock for ca certs: {Name:mk293c0d980623a78c1c8e4e7829d120cb991002 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 19:12:23.657459  722185 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19649-716050/.minikube/ca.key
	I0916 19:12:24.414321  722185 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19649-716050/.minikube/ca.crt ...
	I0916 19:12:24.414357  722185 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-716050/.minikube/ca.crt: {Name:mkc2d73fa98cf3a4d32a8881188b66248a9378b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 19:12:24.414563  722185 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19649-716050/.minikube/ca.key ...
	I0916 19:12:24.414576  722185 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-716050/.minikube/ca.key: {Name:mk413d50a2f1ad2b1cf68b251136c13316e4c729 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 19:12:24.414668  722185 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19649-716050/.minikube/proxy-client-ca.key
	I0916 19:12:24.665274  722185 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19649-716050/.minikube/proxy-client-ca.crt ...
	I0916 19:12:24.665305  722185 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-716050/.minikube/proxy-client-ca.crt: {Name:mk8a7d928233818982d8e581d86d8fcda32775a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 19:12:24.665490  722185 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19649-716050/.minikube/proxy-client-ca.key ...
	I0916 19:12:24.665503  722185 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-716050/.minikube/proxy-client-ca.key: {Name:mk99c18be5996c2e385469ebaca2ce9a2ac15600 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 19:12:24.665594  722185 certs.go:256] generating profile certs ...
	I0916 19:12:24.665657  722185 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/addons-350900/client.key
	I0916 19:12:24.665686  722185 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/addons-350900/client.crt with IP's: []
	I0916 19:12:25.525528  722185 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/addons-350900/client.crt ...
	I0916 19:12:25.525565  722185 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/addons-350900/client.crt: {Name:mk9414750b0b6db8841430ce651a1ccffb83d94a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 19:12:25.525756  722185 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/addons-350900/client.key ...
	I0916 19:12:25.525770  722185 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/addons-350900/client.key: {Name:mk808f92cf809f914da45a17bb2ac02cf9fe0af6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 19:12:25.525846  722185 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/addons-350900/apiserver.key.d584c5bb
	I0916 19:12:25.525865  722185 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/addons-350900/apiserver.crt.d584c5bb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0916 19:12:26.560817  722185 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/addons-350900/apiserver.crt.d584c5bb ...
	I0916 19:12:26.560854  722185 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/addons-350900/apiserver.crt.d584c5bb: {Name:mk00eb6670391efa23272abd893c7314d6b5f37d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 19:12:26.561050  722185 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/addons-350900/apiserver.key.d584c5bb ...
	I0916 19:12:26.561068  722185 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/addons-350900/apiserver.key.d584c5bb: {Name:mk2a280df34a118e651f9c25e2a98bafe8399209 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 19:12:26.561159  722185 certs.go:381] copying /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/addons-350900/apiserver.crt.d584c5bb -> /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/addons-350900/apiserver.crt
	I0916 19:12:26.561237  722185 certs.go:385] copying /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/addons-350900/apiserver.key.d584c5bb -> /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/addons-350900/apiserver.key
	I0916 19:12:26.561296  722185 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/addons-350900/proxy-client.key
	I0916 19:12:26.561317  722185 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/addons-350900/proxy-client.crt with IP's: []
	I0916 19:12:27.102307  722185 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/addons-350900/proxy-client.crt ...
	I0916 19:12:27.102342  722185 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/addons-350900/proxy-client.crt: {Name:mk22a479c45edc179d04d95c7a7cca0e70afea58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 19:12:27.102542  722185 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/addons-350900/proxy-client.key ...
	I0916 19:12:27.102557  722185 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/addons-350900/proxy-client.key: {Name:mk9636799bdb295f08cf2e6d988696e5f8bd5bdd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 19:12:27.103227  722185 certs.go:484] found cert: /home/jenkins/minikube-integration/19649-716050/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 19:12:27.103272  722185 certs.go:484] found cert: /home/jenkins/minikube-integration/19649-716050/.minikube/certs/ca.pem (1082 bytes)
	I0916 19:12:27.103296  722185 certs.go:484] found cert: /home/jenkins/minikube-integration/19649-716050/.minikube/certs/cert.pem (1123 bytes)
	I0916 19:12:27.103354  722185 certs.go:484] found cert: /home/jenkins/minikube-integration/19649-716050/.minikube/certs/key.pem (1675 bytes)
	I0916 19:12:27.103951  722185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-716050/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 19:12:27.129091  722185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-716050/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0916 19:12:27.154463  722185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-716050/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 19:12:27.179189  722185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-716050/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 19:12:27.204344  722185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/addons-350900/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0916 19:12:27.228324  722185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/addons-350900/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 19:12:27.251997  722185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/addons-350900/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 19:12:27.276403  722185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/addons-350900/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0916 19:12:27.301012  722185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-716050/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 19:12:27.325292  722185 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 19:12:27.344040  722185 ssh_runner.go:195] Run: openssl version
	I0916 19:12:27.349730  722185 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 19:12:27.359426  722185 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 19:12:27.363078  722185 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 19:12 /usr/share/ca-certificates/minikubeCA.pem
	I0916 19:12:27.363177  722185 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 19:12:27.370388  722185 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
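
The symlink name b5213941.0 is not arbitrary: OpenSSL looks up CAs in /etc/ssl/certs by subject-name hash, and the openssl x509 -hash call two lines up computed that hash for minikubeCA.pem. Reproducing the same steps by hand:

    $ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    b5213941
    $ sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
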
	I0916 19:12:27.379845  722185 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 19:12:27.383140  722185 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 19:12:27.383189  722185 kubeadm.go:392] StartCluster: {Name:addons-350900 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-350900 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 19:12:27.383274  722185 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0916 19:12:27.383359  722185 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 19:12:27.420625  722185 cri.go:89] found id: ""
	I0916 19:12:27.420756  722185 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 19:12:27.429809  722185 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0916 19:12:27.438513  722185 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0916 19:12:27.438582  722185 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 19:12:27.447219  722185 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 19:12:27.447281  722185 kubeadm.go:157] found existing configuration files:
	
	I0916 19:12:27.447410  722185 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0916 19:12:27.456375  722185 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 19:12:27.456451  722185 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0916 19:12:27.464928  722185 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0916 19:12:27.473770  722185 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 19:12:27.473855  722185 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0916 19:12:27.482599  722185 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0916 19:12:27.491370  722185 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 19:12:27.491432  722185 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 19:12:27.499903  722185 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0916 19:12:27.509095  722185 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 19:12:27.509159  722185 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0916 19:12:27.517932  722185 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0916 19:12:27.571988  722185 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0916 19:12:27.572081  722185 kubeadm.go:310] [preflight] Running pre-flight checks
	I0916 19:12:27.596079  722185 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0916 19:12:27.596158  722185 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1069-aws
	I0916 19:12:27.596199  722185 kubeadm.go:310] OS: Linux
	I0916 19:12:27.596248  722185 kubeadm.go:310] CGROUPS_CPU: enabled
	I0916 19:12:27.596300  722185 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0916 19:12:27.596351  722185 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0916 19:12:27.596404  722185 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0916 19:12:27.596462  722185 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0916 19:12:27.596514  722185 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0916 19:12:27.596563  722185 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0916 19:12:27.596617  722185 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0916 19:12:27.596667  722185 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0916 19:12:27.666390  722185 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0916 19:12:27.666531  722185 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0916 19:12:27.666640  722185 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0916 19:12:27.672050  722185 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0916 19:12:27.675572  722185 out.go:235]   - Generating certificates and keys ...
	I0916 19:12:27.675787  722185 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0916 19:12:27.675868  722185 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0916 19:12:28.044991  722185 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0916 19:12:28.188017  722185 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0916 19:12:28.701837  722185 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0916 19:12:28.979963  722185 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0916 19:12:29.148074  722185 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0916 19:12:29.148266  722185 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-350900 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0916 19:12:29.721323  722185 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0916 19:12:29.721464  722185 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-350900 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0916 19:12:29.957055  722185 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0916 19:12:30.137954  722185 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0916 19:12:30.694022  722185 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0916 19:12:30.694182  722185 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0916 19:12:30.939072  722185 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0916 19:12:31.303404  722185 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0916 19:12:31.445488  722185 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0916 19:12:31.685489  722185 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0916 19:12:31.861732  722185 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0916 19:12:31.862526  722185 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0916 19:12:31.866111  722185 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0916 19:12:31.869549  722185 out.go:235]   - Booting up control plane ...
	I0916 19:12:31.869661  722185 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0916 19:12:31.869745  722185 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0916 19:12:31.870774  722185 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0916 19:12:31.884896  722185 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 19:12:31.891814  722185 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 19:12:31.891875  722185 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0916 19:12:31.998734  722185 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0916 19:12:31.998884  722185 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0916 19:12:34.000381  722185 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 2.001651849s
	I0916 19:12:34.000471  722185 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0916 19:12:40.007648  722185 kubeadm.go:310] [api-check] The API server is healthy after 6.002349031s
	I0916 19:12:40.040640  722185 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0916 19:12:40.060420  722185 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0916 19:12:40.093063  722185 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0916 19:12:40.093264  722185 kubeadm.go:310] [mark-control-plane] Marking the node addons-350900 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0916 19:12:40.109948  722185 kubeadm.go:310] [bootstrap-token] Using token: jb14dy.wn28fp2z12n6ssay
	I0916 19:12:40.112710  722185 out.go:235]   - Configuring RBAC rules ...
	I0916 19:12:40.112855  722185 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0916 19:12:40.122502  722185 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0916 19:12:40.133553  722185 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0916 19:12:40.142917  722185 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0916 19:12:40.149912  722185 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0916 19:12:40.157821  722185 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0916 19:12:40.409678  722185 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0916 19:12:40.837924  722185 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0916 19:12:41.409579  722185 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0916 19:12:41.410623  722185 kubeadm.go:310] 
	I0916 19:12:41.410702  722185 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0916 19:12:41.410717  722185 kubeadm.go:310] 
	I0916 19:12:41.410800  722185 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0916 19:12:41.410808  722185 kubeadm.go:310] 
	I0916 19:12:41.410835  722185 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0916 19:12:41.410897  722185 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0916 19:12:41.410951  722185 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0916 19:12:41.410960  722185 kubeadm.go:310] 
	I0916 19:12:41.411025  722185 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0916 19:12:41.411037  722185 kubeadm.go:310] 
	I0916 19:12:41.411085  722185 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0916 19:12:41.411093  722185 kubeadm.go:310] 
	I0916 19:12:41.411145  722185 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0916 19:12:41.411223  722185 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0916 19:12:41.411294  722185 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0916 19:12:41.411303  722185 kubeadm.go:310] 
	I0916 19:12:41.411408  722185 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0916 19:12:41.411489  722185 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0916 19:12:41.411500  722185 kubeadm.go:310] 
	I0916 19:12:41.411583  722185 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token jb14dy.wn28fp2z12n6ssay \
	I0916 19:12:41.411688  722185 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:cc5585979003d23940f3a243a8957ced2e239d06dc657563b0932f1aecc73f83 \
	I0916 19:12:41.411712  722185 kubeadm.go:310] 	--control-plane 
	I0916 19:12:41.411721  722185 kubeadm.go:310] 
	I0916 19:12:41.411805  722185 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0916 19:12:41.411812  722185 kubeadm.go:310] 
	I0916 19:12:41.411892  722185 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token jb14dy.wn28fp2z12n6ssay \
	I0916 19:12:41.411993  722185 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:cc5585979003d23940f3a243a8957ced2e239d06dc657563b0932f1aecc73f83 
	I0916 19:12:41.414757  722185 kubeadm.go:310] W0916 19:12:27.568311    1028 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 19:12:41.415052  722185 kubeadm.go:310] W0916 19:12:27.569434    1028 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 19:12:41.415266  722185 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1069-aws\n", err: exit status 1
	I0916 19:12:41.415396  722185 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
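
Both W-level lines above are warnings, not failures, and the first two point at the same fix: the generated config still uses the deprecated kubeadm.k8s.io/v1beta3 API. The migration kubeadm suggests is a one-liner (the file names are placeholders taken from the warning text itself):

    kubeadm config migrate --old-config old.yaml --new-config new.yaml
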
	I0916 19:12:41.415419  722185 cni.go:84] Creating CNI manager for ""
	I0916 19:12:41.415430  722185 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0916 19:12:41.419964  722185 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0916 19:12:41.422698  722185 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0916 19:12:41.426792  722185 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0916 19:12:41.426815  722185 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0916 19:12:41.445570  722185 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0916 19:12:41.732866  722185 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0916 19:12:41.733003  722185 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 19:12:41.733084  722185 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-350900 minikube.k8s.io/updated_at=2024_09_16T19_12_41_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=91d692c919753635ac118b7ed7ae5503b67c63c8 minikube.k8s.io/name=addons-350900 minikube.k8s.io/primary=true
	I0916 19:12:42.001571  722185 ops.go:34] apiserver oom_adj: -16
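
The oom_adj value of -16 read above is the legacy-scale reflection of the oom_score_adj the kubelet sets on critical static pods (typically -997, which truncates to -16 on the old -16..15 scale), telling the kernel's OOM killer to sacrifice almost anything else before the apiserver. Checking both knobs, where the modern one is authoritative:

    cat /proc/$(pgrep kube-apiserver)/oom_adj        # legacy scale; -16 in this run
    cat /proc/$(pgrep kube-apiserver)/oom_score_adj  # modern scale; typically -997
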
	I0916 19:12:42.001685  722185 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 19:12:42.502406  722185 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 19:12:43.002786  722185 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 19:12:43.502665  722185 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 19:12:44.002467  722185 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 19:12:44.501840  722185 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 19:12:45.002629  722185 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 19:12:45.502344  722185 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 19:12:45.608835  722185 kubeadm.go:1113] duration metric: took 3.875872801s to wait for elevateKubeSystemPrivileges
	I0916 19:12:45.608868  722185 kubeadm.go:394] duration metric: took 18.225682051s to StartCluster
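
The half-second polling loop above (the repeated kubectl get sa default calls) is the elevateKubeSystemPrivileges step waiting for the default ServiceAccount to exist before the cluster-admin binding for kube-system:default can take effect. The same check from outside the node, using this run's context name:

    kubectl --context addons-350900 -n default get serviceaccount default
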
	I0916 19:12:45.608886  722185 settings.go:142] acquiring lock: {Name:mk07aae78f50a6c58469ab3950475223131150bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 19:12:45.609020  722185 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19649-716050/kubeconfig
	I0916 19:12:45.609385  722185 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-716050/kubeconfig: {Name:mk8f5a792b67bd8f95cfe5b13b3ce4d720aa03a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 19:12:45.610015  722185 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0916 19:12:45.610060  722185 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0916 19:12:45.610299  722185 config.go:182] Loaded profile config "addons-350900": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 19:12:45.610330  722185 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0916 19:12:45.610412  722185 addons.go:69] Setting yakd=true in profile "addons-350900"
	I0916 19:12:45.610425  722185 addons.go:234] Setting addon yakd=true in "addons-350900"
	I0916 19:12:45.610449  722185 host.go:66] Checking if "addons-350900" exists ...
	I0916 19:12:45.610897  722185 cli_runner.go:164] Run: docker container inspect addons-350900 --format={{.State.Status}}
	I0916 19:12:45.611452  722185 addons.go:69] Setting metrics-server=true in profile "addons-350900"
	I0916 19:12:45.611465  722185 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-350900"
	I0916 19:12:45.611476  722185 addons.go:69] Setting cloud-spanner=true in profile "addons-350900"
	I0916 19:12:45.611485  722185 addons.go:234] Setting addon cloud-spanner=true in "addons-350900"
	I0916 19:12:45.611472  722185 addons.go:234] Setting addon metrics-server=true in "addons-350900"
	I0916 19:12:45.611509  722185 host.go:66] Checking if "addons-350900" exists ...
	I0916 19:12:45.611514  722185 addons.go:69] Setting registry=true in profile "addons-350900"
	I0916 19:12:45.611530  722185 addons.go:234] Setting addon registry=true in "addons-350900"
	I0916 19:12:45.611546  722185 host.go:66] Checking if "addons-350900" exists ...
	I0916 19:12:45.611949  722185 cli_runner.go:164] Run: docker container inspect addons-350900 --format={{.State.Status}}
	I0916 19:12:45.611974  722185 cli_runner.go:164] Run: docker container inspect addons-350900 --format={{.State.Status}}
	I0916 19:12:45.615411  722185 addons.go:69] Setting storage-provisioner=true in profile "addons-350900"
	I0916 19:12:45.615482  722185 addons.go:234] Setting addon storage-provisioner=true in "addons-350900"
	I0916 19:12:45.615554  722185 host.go:66] Checking if "addons-350900" exists ...
	I0916 19:12:45.617588  722185 cli_runner.go:164] Run: docker container inspect addons-350900 --format={{.State.Status}}
	I0916 19:12:45.620160  722185 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-350900"
	I0916 19:12:45.620232  722185 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-350900"
	I0916 19:12:45.620266  722185 host.go:66] Checking if "addons-350900" exists ...
	I0916 19:12:45.620743  722185 cli_runner.go:164] Run: docker container inspect addons-350900 --format={{.State.Status}}
	I0916 19:12:45.621034  722185 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-350900"
	I0916 19:12:45.621056  722185 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-350900"
	I0916 19:12:45.621343  722185 cli_runner.go:164] Run: docker container inspect addons-350900 --format={{.State.Status}}
	I0916 19:12:45.627783  722185 addons.go:69] Setting default-storageclass=true in profile "addons-350900"
	I0916 19:12:45.627865  722185 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-350900"
	I0916 19:12:45.628237  722185 cli_runner.go:164] Run: docker container inspect addons-350900 --format={{.State.Status}}
	I0916 19:12:45.652722  722185 addons.go:69] Setting volcano=true in profile "addons-350900"
	I0916 19:12:45.652754  722185 addons.go:234] Setting addon volcano=true in "addons-350900"
	I0916 19:12:45.652796  722185 host.go:66] Checking if "addons-350900" exists ...
	I0916 19:12:45.653281  722185 cli_runner.go:164] Run: docker container inspect addons-350900 --format={{.State.Status}}
	I0916 19:12:45.655863  722185 addons.go:69] Setting gcp-auth=true in profile "addons-350900"
	I0916 19:12:45.655905  722185 mustload.go:65] Loading cluster: addons-350900
	I0916 19:12:45.656107  722185 config.go:182] Loaded profile config "addons-350900": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 19:12:45.656371  722185 cli_runner.go:164] Run: docker container inspect addons-350900 --format={{.State.Status}}
	I0916 19:12:45.677406  722185 addons.go:69] Setting ingress=true in profile "addons-350900"
	I0916 19:12:45.677504  722185 addons.go:234] Setting addon ingress=true in "addons-350900"
	I0916 19:12:45.677594  722185 host.go:66] Checking if "addons-350900" exists ...
	I0916 19:12:45.685498  722185 addons.go:69] Setting volumesnapshots=true in profile "addons-350900"
	I0916 19:12:45.701801  722185 addons.go:234] Setting addon volumesnapshots=true in "addons-350900"
	I0916 19:12:45.701841  722185 host.go:66] Checking if "addons-350900" exists ...
	I0916 19:12:45.702333  722185 cli_runner.go:164] Run: docker container inspect addons-350900 --format={{.State.Status}}
	I0916 19:12:45.611510  722185 host.go:66] Checking if "addons-350900" exists ...
	I0916 19:12:45.611485  722185 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-350900"
	I0916 19:12:45.685769  722185 out.go:177] * Verifying Kubernetes components...
	I0916 19:12:45.691893  722185 addons.go:69] Setting ingress-dns=true in profile "addons-350900"
	I0916 19:12:45.691908  722185 addons.go:69] Setting inspektor-gadget=true in profile "addons-350900"
	I0916 19:12:45.693583  722185 cli_runner.go:164] Run: docker container inspect addons-350900 --format={{.State.Status}}
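The "Setting addon <name>=true" entries above enumerate the addon set being enabled on the addons-350900 profile before verification begins. For reference, the same toggles can be applied by hand with the minikube addons CLI (illustrative commands; the test drives this through its own configuration rather than these invocations):

	minikube -p addons-350900 addons enable registry
	minikube -p addons-350900 addons enable csi-hostpath-driver
	minikube -p addons-350900 addons list    # shows enabled/disabled state per addon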
	I0916 19:12:45.731339  722185 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0916 19:12:45.732946  722185 out.go:177]   - Using image docker.io/registry:2.8.3
	I0916 19:12:45.733854  722185 cli_runner.go:164] Run: docker container inspect addons-350900 --format={{.State.Status}}
	I0916 19:12:45.743512  722185 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0916 19:12:45.745081  722185 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0916 19:12:45.745219  722185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-350900
	I0916 19:12:45.744615  722185 addons.go:234] Setting addon ingress-dns=true in "addons-350900"
	I0916 19:12:45.745561  722185 host.go:66] Checking if "addons-350900" exists ...
	I0916 19:12:45.744678  722185 host.go:66] Checking if "addons-350900" exists ...
	I0916 19:12:45.746482  722185 cli_runner.go:164] Run: docker container inspect addons-350900 --format={{.State.Status}}
	I0916 19:12:45.744747  722185 addons.go:234] Setting addon inspektor-gadget=true in "addons-350900"
	I0916 19:12:45.773736  722185 host.go:66] Checking if "addons-350900" exists ...
	I0916 19:12:45.774250  722185 cli_runner.go:164] Run: docker container inspect addons-350900 --format={{.State.Status}}
	I0916 19:12:45.778282  722185 cli_runner.go:164] Run: docker container inspect addons-350900 --format={{.State.Status}}
	I0916 19:12:45.799208  722185 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0916 19:12:45.802880  722185 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0916 19:12:45.802909  722185 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0916 19:12:45.802971  722185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-350900
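Each "docker container inspect -f ..." call above uses a Go template to pull the host port that Docker mapped to the node container's 22/tcp, which is how the sshutil clients that follow know where to dial. Run against this profile it resolves to the port seen in the "new ssh client" entries below (a manual check, not part of the test):

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-350900
	# -> 33530, matching Port:33530 in the sshutil.go lines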
	I0916 19:12:45.818961  722185 addons.go:234] Setting addon default-storageclass=true in "addons-350900"
	I0916 19:12:45.819009  722185 host.go:66] Checking if "addons-350900" exists ...
	I0916 19:12:45.819459  722185 cli_runner.go:164] Run: docker container inspect addons-350900 --format={{.State.Status}}
	I0916 19:12:45.833140  722185 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 19:12:45.833968  722185 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0916 19:12:45.837963  722185 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 19:12:45.869572  722185 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0916 19:12:45.873402  722185 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0916 19:12:45.873472  722185 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0916 19:12:45.873576  722185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-350900
	I0916 19:12:45.873816  722185 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0916 19:12:45.877349  722185 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0916 19:12:45.882216  722185 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0916 19:12:45.885359  722185 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0916 19:12:45.892155  722185 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 19:12:45.892181  722185 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 19:12:45.892245  722185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-350900
	I0916 19:12:45.895017  722185 host.go:66] Checking if "addons-350900" exists ...
	I0916 19:12:45.903130  722185 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0916 19:12:45.903830  722185 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0916 19:12:45.905062  722185 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-350900"
	I0916 19:12:45.905099  722185 host.go:66] Checking if "addons-350900" exists ...
	I0916 19:12:45.905522  722185 cli_runner.go:164] Run: docker container inspect addons-350900 --format={{.State.Status}}
	I0916 19:12:45.920448  722185 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0916 19:12:45.929084  722185 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0916 19:12:45.929707  722185 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0916 19:12:45.931674  722185 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0916 19:12:45.931697  722185 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0916 19:12:45.931773  722185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-350900
	I0916 19:12:45.952317  722185 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0916 19:12:45.956097  722185 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0916 19:12:45.956121  722185 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0916 19:12:45.956186  722185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-350900
	I0916 19:12:46.000845  722185 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0916 19:12:46.010854  722185 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0916 19:12:46.014184  722185 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0916 19:12:46.031576  722185 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0916 19:12:46.036281  722185 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0916 19:12:46.036307  722185 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0916 19:12:46.036388  722185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-350900
	I0916 19:12:46.040002  722185 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0916 19:12:46.040034  722185 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0916 19:12:46.040124  722185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-350900
	I0916 19:12:46.051188  722185 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0916 19:12:46.051210  722185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/19649-716050/.minikube/machines/addons-350900/id_rsa Username:docker}
	I0916 19:12:46.055545  722185 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0916 19:12:46.055812  722185 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0916 19:12:46.055828  722185 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0916 19:12:46.055892  722185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-350900
	I0916 19:12:46.058131  722185 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0916 19:12:46.058244  722185 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0916 19:12:46.058417  722185 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0916 19:12:46.058432  722185 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0916 19:12:46.058511  722185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-350900
	I0916 19:12:46.071766  722185 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0916 19:12:46.071788  722185 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0916 19:12:46.071852  722185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-350900
	I0916 19:12:46.088680  722185 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 19:12:46.088729  722185 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 19:12:46.088794  722185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-350900
	I0916 19:12:46.089397  722185 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0916 19:12:46.089414  722185 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0916 19:12:46.089463  722185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-350900
	I0916 19:12:46.115611  722185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/19649-716050/.minikube/machines/addons-350900/id_rsa Username:docker}
	I0916 19:12:46.127550  722185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/19649-716050/.minikube/machines/addons-350900/id_rsa Username:docker}
	I0916 19:12:46.128672  722185 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0916 19:12:46.131745  722185 out.go:177]   - Using image docker.io/busybox:stable
	I0916 19:12:46.137322  722185 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0916 19:12:46.139214  722185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/19649-716050/.minikube/machines/addons-350900/id_rsa Username:docker}
	I0916 19:12:46.143509  722185 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0916 19:12:46.143535  722185 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0916 19:12:46.143603  722185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-350900
	I0916 19:12:46.199418  722185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/19649-716050/.minikube/machines/addons-350900/id_rsa Username:docker}
	I0916 19:12:46.214667  722185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/19649-716050/.minikube/machines/addons-350900/id_rsa Username:docker}
	I0916 19:12:46.215065  722185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/19649-716050/.minikube/machines/addons-350900/id_rsa Username:docker}
	I0916 19:12:46.239133  722185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/19649-716050/.minikube/machines/addons-350900/id_rsa Username:docker}
	I0916 19:12:46.253450  722185 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 19:12:46.254128  722185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/19649-716050/.minikube/machines/addons-350900/id_rsa Username:docker}
	I0916 19:12:46.270052  722185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/19649-716050/.minikube/machines/addons-350900/id_rsa Username:docker}
	I0916 19:12:46.294778  722185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/19649-716050/.minikube/machines/addons-350900/id_rsa Username:docker}
	I0916 19:12:46.298714  722185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/19649-716050/.minikube/machines/addons-350900/id_rsa Username:docker}
	I0916 19:12:46.308478  722185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/19649-716050/.minikube/machines/addons-350900/id_rsa Username:docker}
	I0916 19:12:46.308550  722185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/19649-716050/.minikube/machines/addons-350900/id_rsa Username:docker}
	W0916 19:12:46.329043  722185 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0916 19:12:46.329074  722185 retry.go:31] will retry after 141.027463ms: ssh: handshake failed: EOF
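The handshake EOF above is a transient failure while many SSH sessions are being opened against the node at once; minikube's retry helper (retry.go) backs off and redials, and the 141ms delay shown is its computed backoff. A minimal shell sketch of the same retry-with-backoff pattern, using the endpoint from the sshutil lines (the delays here are made up for illustration, not minikube's schedule):

	for delay in 0.1 0.2 0.4; do
	  ssh -p 33530 -o ConnectTimeout=5 docker@127.0.0.1 true && break
	  sleep "$delay"    # back off before redialing
	done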
	I0916 19:12:46.508439  722185 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0916 19:12:46.508463  722185 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0916 19:12:46.666522  722185 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0916 19:12:46.666601  722185 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0916 19:12:46.768998  722185 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0916 19:12:46.769074  722185 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0916 19:12:46.886167  722185 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0916 19:12:46.963072  722185 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0916 19:12:46.963095  722185 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0916 19:12:47.067478  722185 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 19:12:47.083245  722185 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0916 19:12:47.083269  722185 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0916 19:12:47.106462  722185 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 19:12:47.111522  722185 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0916 19:12:47.145542  722185 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0916 19:12:47.145615  722185 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0916 19:12:47.147562  722185 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0916 19:12:47.164769  722185 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0916 19:12:47.164842  722185 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0916 19:12:47.184583  722185 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0916 19:12:47.184660  722185 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0916 19:12:47.204325  722185 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0916 19:12:47.211119  722185 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0916 19:12:47.211192  722185 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0916 19:12:47.286076  722185 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0916 19:12:47.301787  722185 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0916 19:12:47.301861  722185 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0916 19:12:47.308435  722185 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0916 19:12:47.310193  722185 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0916 19:12:47.310211  722185 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0916 19:12:47.313349  722185 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0916 19:12:47.444126  722185 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0916 19:12:47.444192  722185 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0916 19:12:47.463206  722185 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0916 19:12:47.503856  722185 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0916 19:12:47.503933  722185 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0916 19:12:47.521690  722185 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0916 19:12:47.521763  722185 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0916 19:12:47.556523  722185 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0916 19:12:47.556598  722185 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0916 19:12:47.628025  722185 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0916 19:12:47.628101  722185 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0916 19:12:47.727244  722185 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0916 19:12:47.727369  722185 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0916 19:12:47.728679  722185 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0916 19:12:47.728758  722185 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0916 19:12:47.756698  722185 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0916 19:12:47.756773  722185 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0916 19:12:47.835263  722185 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0916 19:12:47.835382  722185 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0916 19:12:47.908825  722185 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.780124047s)
	I0916 19:12:47.908951  722185 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
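The pipeline that just completed (1.78s) rewrites the CoreDNS ConfigMap in place: it inserts a hosts block ahead of the "forward . /etc/resolv.conf" line so that host.minikube.internal resolves to the gateway address, and adds "log" before "errors". Reconstructed from the sed expressions, the injected Corefile stanza and a manual way to confirm it are:

	# Injected stanza (reconstructed from the sed expressions above):
	#   hosts {
	#      192.168.49.1 host.minikube.internal
	#      fallthrough
	#   }
	kubectl --context addons-350900 -n kube-system get configmap coredns -o yaml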
	I0916 19:12:47.908915  722185 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.655443085s)
	I0916 19:12:47.910123  722185 node_ready.go:35] waiting up to 6m0s for node "addons-350900" to be "Ready" ...
	I0916 19:12:47.913154  722185 node_ready.go:49] node "addons-350900" has status "Ready":"True"
	I0916 19:12:47.913217  722185 node_ready.go:38] duration metric: took 3.039995ms for node "addons-350900" to be "Ready" ...
	I0916 19:12:47.913241  722185 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 19:12:47.922605  722185 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-9p5mr" in "kube-system" namespace to be "Ready" ...
	I0916 19:12:48.058805  722185 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0916 19:12:48.058835  722185 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0916 19:12:48.084975  722185 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0916 19:12:48.085004  722185 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0916 19:12:48.117389  722185 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0916 19:12:48.162005  722185 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0916 19:12:48.172002  722185 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0916 19:12:48.172031  722185 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0916 19:12:48.235211  722185 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.348956946s)
	I0916 19:12:48.413840  722185 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-350900" context rescaled to 1 replicas
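The kapi.go line above records the coredns deployment being scaled down to one replica for this single-node profile; the "not found" churn around coredns-7c65d6cfc9-9p5mr just below is likely a side effect, since the rescale terminates one of the two original pods while the readiness waiter is still watching it. The equivalent manual operation would be:

	kubectl --context addons-350900 -n kube-system scale deployment coredns --replicas=1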
	I0916 19:12:48.435957  722185 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0916 19:12:48.436032  722185 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0916 19:12:48.458603  722185 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0916 19:12:48.458684  722185 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0916 19:12:48.480743  722185 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0916 19:12:48.480819  722185 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0916 19:12:48.582746  722185 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0916 19:12:48.582822  722185 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0916 19:12:48.698064  722185 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0916 19:12:48.698140  722185 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0916 19:12:48.805047  722185 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0916 19:12:48.805137  722185 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0916 19:12:48.925942  722185 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-9p5mr" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-9p5mr" not found
	I0916 19:12:48.926015  722185 pod_ready.go:82] duration metric: took 1.003323217s for pod "coredns-7c65d6cfc9-9p5mr" in "kube-system" namespace to be "Ready" ...
	E0916 19:12:48.926070  722185 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-9p5mr" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-9p5mr" not found
	I0916 19:12:48.926093  722185 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-bclls" in "kube-system" namespace to be "Ready" ...
	I0916 19:12:49.119881  722185 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0916 19:12:49.119908  722185 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0916 19:12:49.163863  722185 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0916 19:12:49.163889  722185 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0916 19:12:49.607750  722185 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0916 19:12:49.622331  722185 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0916 19:12:50.447109  722185 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.379507796s)
	I0916 19:12:50.447228  722185 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.340697734s)
	I0916 19:12:50.939556  722185 pod_ready.go:103] pod "coredns-7c65d6cfc9-bclls" in "kube-system" namespace has status "Ready":"False"
	I0916 19:12:53.113506  722185 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0916 19:12:53.113595  722185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-350900
	I0916 19:12:53.145008  722185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/19649-716050/.minikube/machines/addons-350900/id_rsa Username:docker}
	I0916 19:12:53.433455  722185 pod_ready.go:103] pod "coredns-7c65d6cfc9-bclls" in "kube-system" namespace has status "Ready":"False"
	I0916 19:12:53.519877  722185 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0916 19:12:53.544541  722185 addons.go:234] Setting addon gcp-auth=true in "addons-350900"
	I0916 19:12:53.544596  722185 host.go:66] Checking if "addons-350900" exists ...
	I0916 19:12:53.545060  722185 cli_runner.go:164] Run: docker container inspect addons-350900 --format={{.State.Status}}
	I0916 19:12:53.571032  722185 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0916 19:12:53.571094  722185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-350900
	I0916 19:12:53.615647  722185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/19649-716050/.minikube/machines/addons-350900/id_rsa Username:docker}
	I0916 19:12:55.441681  722185 pod_ready.go:103] pod "coredns-7c65d6cfc9-bclls" in "kube-system" namespace has status "Ready":"False"
	I0916 19:12:55.885111  722185 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (8.773508967s)
	I0916 19:12:55.885198  722185 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (8.737575794s)
	I0916 19:12:55.885241  722185 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (8.680845345s)
	I0916 19:12:55.885416  722185 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (8.599267657s)
	I0916 19:12:55.885476  722185 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.577022664s)
	I0916 19:12:55.885653  722185 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.572269835s)
	I0916 19:12:55.885685  722185 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.422390783s)
	I0916 19:12:55.885705  722185 addons.go:475] Verifying addon registry=true in "addons-350900"
	I0916 19:12:55.885729  722185 addons.go:475] Verifying addon ingress=true in "addons-350900"
	I0916 19:12:55.886200  722185 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.768577698s)
	W0916 19:12:55.886253  722185 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0916 19:12:55.886270  722185 retry.go:31] will retry after 314.470027ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
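The failure above is the standard CRD/CR ordering race: the VolumeSnapshotClass object is applied in the same kubectl apply batch as the CRDs that define it, and the API server's discovery has not yet registered the new kind ("ensure CRDs are installed first"). The log recovers by retrying 314ms later with kubectl apply --force, by which point the CRDs created on the first pass have registered. An alternative that avoids the race outright is to apply the CRDs first and wait for them to be Established before applying the custom resources (a sketch, not what minikube does):

	kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	kubectl wait --for=condition=Established \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml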
	I0916 19:12:55.886635  722185 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.724576562s)
	I0916 19:12:55.886655  722185 addons.go:475] Verifying addon metrics-server=true in "addons-350900"
	I0916 19:12:55.886747  722185 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.278962088s)
	I0916 19:12:55.890810  722185 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-350900 service yakd-dashboard -n yakd-dashboard
	
	I0916 19:12:55.892222  722185 out.go:177] * Verifying registry addon...
	I0916 19:12:55.892292  722185 out.go:177] * Verifying ingress addon...
	I0916 19:12:55.895693  722185 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0916 19:12:55.895753  722185 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0916 19:12:55.924396  722185 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0916 19:12:55.924424  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:12:55.926099  722185 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0916 19:12:55.926122  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
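From here the test settles into kapi.go polling loops, re-checking the registry, ingress-nginx, and (shortly) csi-hostpath-driver label selectors until the matching pods leave Pending; "current state: Pending: [<nil>]" appears to mean the pods exist but their phase is still Pending with no failure reason recorded. The same wait can be expressed directly with kubectl (a manual equivalent, not the test's mechanism):

	kubectl --context addons-350900 -n kube-system wait \
	  --for=condition=Ready pod -l kubernetes.io/minikube-addons=registry --timeout=6m
	kubectl --context addons-350900 -n ingress-nginx wait \
	  --for=condition=Ready pod -l app.kubernetes.io/name=ingress-nginx --timeout=6m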
	I0916 19:12:56.201190  722185 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0916 19:12:56.416614  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:12:56.417259  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:12:56.620550  722185 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.99816923s)
	I0916 19:12:56.620585  722185 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-350900"
	I0916 19:12:56.620767  722185 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.049711101s)
	I0916 19:12:56.623808  722185 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0916 19:12:56.623871  722185 out.go:177] * Verifying csi-hostpath-driver addon...
	I0916 19:12:56.626559  722185 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0916 19:12:56.627547  722185 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0916 19:12:56.629494  722185 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0916 19:12:56.629522  722185 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0916 19:12:56.635682  722185 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0916 19:12:56.635707  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:12:56.686823  722185 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0916 19:12:56.686849  722185 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0916 19:12:56.722004  722185 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0916 19:12:56.722030  722185 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0916 19:12:56.812775  722185 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0916 19:12:56.901625  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:12:56.902617  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:12:57.141728  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:12:57.408075  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:12:57.411162  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:12:57.452897  722185 pod_ready.go:103] pod "coredns-7c65d6cfc9-bclls" in "kube-system" namespace has status "Ready":"False"
	I0916 19:12:57.634927  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:12:57.901678  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:12:57.902974  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:12:58.000570  722185 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.799327124s)
	I0916 19:12:58.000667  722185 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.187869357s)
	I0916 19:12:58.003498  722185 addons.go:475] Verifying addon gcp-auth=true in "addons-350900"
	I0916 19:12:58.006363  722185 out.go:177] * Verifying gcp-auth addon...
	I0916 19:12:58.010064  722185 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0916 19:12:58.013116  722185 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0916 19:12:58.133647  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:12:58.401218  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:12:58.402209  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:12:58.632857  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:12:58.902263  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:12:58.903939  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:12:59.134108  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:12:59.401503  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:12:59.402949  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:12:59.633609  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:12:59.901497  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:12:59.902061  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:12:59.932592  722185 pod_ready.go:103] pod "coredns-7c65d6cfc9-bclls" in "kube-system" namespace has status "Ready":"False"
	I0916 19:13:00.138863  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:13:00.403565  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:13:00.403960  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:13:00.632735  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:13:00.902516  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:13:00.902922  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:13:01.136100  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:13:01.402614  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:13:01.404765  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:13:01.632998  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:13:01.902848  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:13:01.904007  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:13:01.933412  722185 pod_ready.go:103] pod "coredns-7c65d6cfc9-bclls" in "kube-system" namespace has status "Ready":"False"
	I0916 19:13:02.133149  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:13:02.403175  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:13:02.404612  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:13:02.633765  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:13:02.900278  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:13:02.901278  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:13:03.133442  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:13:03.402208  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:13:03.403562  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:13:03.633604  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:13:03.916134  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:13:03.917839  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:13:03.934678  722185 pod_ready.go:103] pod "coredns-7c65d6cfc9-bclls" in "kube-system" namespace has status "Ready":"False"
	I0916 19:13:04.133695  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:13:04.401744  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:13:04.402013  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:13:04.632181  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:13:04.900581  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:13:04.901510  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:13:05.134816  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:13:05.402125  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:13:05.402865  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:13:05.632795  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:13:05.902540  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:13:05.904187  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:13:05.936079  722185 pod_ready.go:103] pod "coredns-7c65d6cfc9-bclls" in "kube-system" namespace has status "Ready":"False"
	I0916 19:13:06.134447  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:13:06.402442  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:13:06.403096  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:13:06.632972  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:13:06.902130  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:13:06.903897  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:13:07.132694  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:13:07.400541  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:13:07.402227  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:13:07.632227  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:13:07.901448  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:13:07.902399  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:13:08.132731  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:13:08.399768  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:13:08.401029  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:13:08.432085  722185 pod_ready.go:103] pod "coredns-7c65d6cfc9-bclls" in "kube-system" namespace has status "Ready":"False"
	I0916 19:13:08.632023  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:13:08.899862  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:13:08.900831  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:13:09.132710  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:13:09.400217  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:13:09.401349  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:13:09.632108  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:13:09.900498  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:13:09.901457  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:13:10.132655  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:13:10.399928  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:13:10.400648  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:13:10.632761  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:13:10.900396  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:13:10.901356  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:13:10.932528  722185 pod_ready.go:103] pod "coredns-7c65d6cfc9-bclls" in "kube-system" namespace has status "Ready":"False"
	I0916 19:13:11.133398  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:13:11.400202  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:13:11.402234  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:13:11.633596  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:13:11.901337  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:13:11.901880  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:13:12.132783  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:13:12.401240  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:13:12.401812  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:13:12.632252  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:13:12.900670  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:13:12.901622  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:13:13.132725  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:13:13.399278  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:13:13.400734  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:13:13.432653  722185 pod_ready.go:103] pod "coredns-7c65d6cfc9-bclls" in "kube-system" namespace has status "Ready":"False"
	I0916 19:13:13.632797  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:13:13.900808  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:13:13.901814  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:13:14.132942  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:13:14.400892  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:13:14.401900  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:13:14.632846  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:13:14.900092  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:13:14.901513  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:13:15.142393  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:13:15.399952  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:13:15.400546  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:13:15.433078  722185 pod_ready.go:103] pod "coredns-7c65d6cfc9-bclls" in "kube-system" namespace has status "Ready":"False"
	I0916 19:13:15.632924  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:13:15.901086  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:13:15.901625  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:13:16.132687  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:13:16.401117  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:13:16.401996  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:13:16.631866  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:13:16.900133  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:13:16.901437  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:13:17.132756  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:13:17.400487  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:13:17.401481  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:13:17.433263  722185 pod_ready.go:103] pod "coredns-7c65d6cfc9-bclls" in "kube-system" namespace has status "Ready":"False"
	I0916 19:13:17.633029  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:13:17.900131  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:13:17.900595  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:13:18.132758  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:13:18.400820  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:13:18.402915  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:13:18.632737  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:13:18.902461  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:13:18.902854  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:13:19.132591  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:13:19.400869  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:13:19.401387  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:13:19.632161  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:13:19.900668  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:13:19.901642  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:13:19.932851  722185 pod_ready.go:103] pod "coredns-7c65d6cfc9-bclls" in "kube-system" namespace has status "Ready":"False"
	I0916 19:13:20.132561  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:13:20.399383  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:13:20.400998  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:13:20.632160  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:13:20.899825  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:13:20.900936  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:13:21.132044  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:13:21.400600  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:13:21.401236  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:13:21.633193  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:13:21.901306  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:13:21.901841  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:13:22.132551  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:13:22.400667  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:13:22.401625  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:13:22.432841  722185 pod_ready.go:103] pod "coredns-7c65d6cfc9-bclls" in "kube-system" namespace has status "Ready":"False"
	I0916 19:13:22.633453  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:13:22.900012  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:13:22.901026  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:13:23.132577  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:13:23.400638  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:13:23.401958  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:13:23.631997  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:13:23.900078  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:13:23.900736  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:13:24.132591  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:13:24.400391  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:13:24.400936  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:13:24.632635  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:13:24.901008  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:13:24.903659  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:13:24.936513  722185 pod_ready.go:103] pod "coredns-7c65d6cfc9-bclls" in "kube-system" namespace has status "Ready":"False"
	I0916 19:13:25.135112  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:13:25.403606  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:13:25.413986  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:13:25.632785  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:13:25.903883  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:13:25.905463  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:13:26.132194  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:13:26.400811  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:13:26.401442  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:13:26.632249  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:13:26.899767  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:13:26.900943  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:13:27.132693  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:13:27.400786  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:13:27.401582  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:13:27.432692  722185 pod_ready.go:103] pod "coredns-7c65d6cfc9-bclls" in "kube-system" namespace has status "Ready":"False"
	I0916 19:13:27.632663  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:13:27.900614  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:13:27.902180  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:13:28.132556  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:13:28.400761  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:13:28.401340  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:13:28.632504  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:13:28.902920  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:13:28.903635  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:13:29.132374  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:13:29.400513  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:13:29.401443  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:13:29.632876  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:13:29.902282  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:13:29.903782  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:13:29.932760  722185 pod_ready.go:103] pod "coredns-7c65d6cfc9-bclls" in "kube-system" namespace has status "Ready":"False"
	I0916 19:13:30.132726  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:13:30.400505  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:13:30.400951  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:13:30.632346  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:13:30.901302  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:13:30.901851  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:13:31.132750  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:13:31.401084  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:13:31.401642  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:13:31.632145  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:13:31.902923  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:13:31.903145  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:13:31.933212  722185 pod_ready.go:103] pod "coredns-7c65d6cfc9-bclls" in "kube-system" namespace has status "Ready":"False"
	I0916 19:13:32.133429  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:13:32.401866  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:13:32.403831  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:13:32.632391  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:13:32.901017  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:13:32.902936  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:13:33.133494  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:13:33.401537  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:13:33.402236  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:13:33.633099  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:13:33.900988  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:13:33.902081  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:13:33.938088  722185 pod_ready.go:103] pod "coredns-7c65d6cfc9-bclls" in "kube-system" namespace has status "Ready":"False"
	I0916 19:13:34.133475  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:13:34.399137  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:13:34.400821  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:13:34.432379  722185 pod_ready.go:93] pod "coredns-7c65d6cfc9-bclls" in "kube-system" namespace has status "Ready":"True"
	I0916 19:13:34.432407  722185 pod_ready.go:82] duration metric: took 45.506282294s for pod "coredns-7c65d6cfc9-bclls" in "kube-system" namespace to be "Ready" ...
	I0916 19:13:34.432421  722185 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-350900" in "kube-system" namespace to be "Ready" ...
	I0916 19:13:34.437663  722185 pod_ready.go:93] pod "etcd-addons-350900" in "kube-system" namespace has status "Ready":"True"
	I0916 19:13:34.437690  722185 pod_ready.go:82] duration metric: took 5.261201ms for pod "etcd-addons-350900" in "kube-system" namespace to be "Ready" ...
	I0916 19:13:34.437705  722185 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-350900" in "kube-system" namespace to be "Ready" ...
	I0916 19:13:34.442977  722185 pod_ready.go:93] pod "kube-apiserver-addons-350900" in "kube-system" namespace has status "Ready":"True"
	I0916 19:13:34.443003  722185 pod_ready.go:82] duration metric: took 5.289286ms for pod "kube-apiserver-addons-350900" in "kube-system" namespace to be "Ready" ...
	I0916 19:13:34.443017  722185 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-350900" in "kube-system" namespace to be "Ready" ...
	I0916 19:13:34.448559  722185 pod_ready.go:93] pod "kube-controller-manager-addons-350900" in "kube-system" namespace has status "Ready":"True"
	I0916 19:13:34.448584  722185 pod_ready.go:82] duration metric: took 5.553814ms for pod "kube-controller-manager-addons-350900" in "kube-system" namespace to be "Ready" ...
	I0916 19:13:34.448597  722185 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-5q9cd" in "kube-system" namespace to be "Ready" ...
	I0916 19:13:34.453789  722185 pod_ready.go:93] pod "kube-proxy-5q9cd" in "kube-system" namespace has status "Ready":"True"
	I0916 19:13:34.453813  722185 pod_ready.go:82] duration metric: took 5.207327ms for pod "kube-proxy-5q9cd" in "kube-system" namespace to be "Ready" ...
	I0916 19:13:34.453824  722185 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-350900" in "kube-system" namespace to be "Ready" ...
	I0916 19:13:34.633194  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:13:34.830556  722185 pod_ready.go:93] pod "kube-scheduler-addons-350900" in "kube-system" namespace has status "Ready":"True"
	I0916 19:13:34.830583  722185 pod_ready.go:82] duration metric: took 376.751795ms for pod "kube-scheduler-addons-350900" in "kube-system" namespace to be "Ready" ...
	I0916 19:13:34.830593  722185 pod_ready.go:39] duration metric: took 46.917328174s of extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 19:13:34.830610  722185 api_server.go:52] waiting for apiserver process to appear ...
	I0916 19:13:34.830671  722185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 19:13:34.849063  722185 api_server.go:72] duration metric: took 49.238968963s to wait for apiserver process to appear ...
	I0916 19:13:34.849138  722185 api_server.go:88] waiting for apiserver healthz status ...
	I0916 19:13:34.849175  722185 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0916 19:13:34.858969  722185 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0916 19:13:34.860274  722185 api_server.go:141] control plane version: v1.31.1
	I0916 19:13:34.860339  722185 api_server.go:131] duration metric: took 11.180266ms to wait for apiserver health ...
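
The healthz wait above is a plain HTTPS GET against the apiserver's /healthz endpoint, repeated until it returns 200 with a body of "ok". A minimal Go sketch of that kind of poll; the certificate-skipping transport here is an assumption made for brevity (a real client would load the cluster CA instead):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// pollHealthz GETs url until it returns HTTP 200 with body "ok",
	// or gives up after timeout.
	func pollHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Assumption for this sketch only: skip cert verification.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK && string(body) == "ok" {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("healthz at %s not ok within %s", url, timeout)
	}

	func main() {
		if err := pollHealthz("https://192.168.49.2:8443/healthz", time.Minute); err != nil {
			fmt.Println(err)
		}
	}
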
	I0916 19:13:34.860364  722185 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 19:13:34.903795  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:13:34.905371  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:13:35.040766  722185 system_pods.go:59] 18 kube-system pods found
	I0916 19:13:35.040865  722185 system_pods.go:61] "coredns-7c65d6cfc9-bclls" [2feb9570-24ab-4f35-b6b6-958fe24c7c04] Running
	I0916 19:13:35.040878  722185 system_pods.go:61] "csi-hostpath-attacher-0" [419ed2b3-0b65-4027-9234-89815031c02e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0916 19:13:35.040888  722185 system_pods.go:61] "csi-hostpath-resizer-0" [c6024dd7-15a6-4b51-a68d-f6498eb431d9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0916 19:13:35.040898  722185 system_pods.go:61] "csi-hostpathplugin-jnlrh" [8e2d78fb-8907-4b5e-bac2-f89cd1fef345] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0916 19:13:35.040909  722185 system_pods.go:61] "etcd-addons-350900" [70f00fba-4b31-4a57-8935-a3b93871c090] Running
	I0916 19:13:35.040922  722185 system_pods.go:61] "kindnet-pcskg" [7c04db60-1933-4251-8bd0-a81481231991] Running
	I0916 19:13:35.040927  722185 system_pods.go:61] "kube-apiserver-addons-350900" [1128bed3-248e-4f2e-b470-9407f06e7d4c] Running
	I0916 19:13:35.040932  722185 system_pods.go:61] "kube-controller-manager-addons-350900" [c60f8149-a592-4663-8ebc-ebb5d225e9b6] Running
	I0916 19:13:35.040943  722185 system_pods.go:61] "kube-ingress-dns-minikube" [15259ed8-5fe4-410f-bdec-4be3317c3dae] Running
	I0916 19:13:35.040947  722185 system_pods.go:61] "kube-proxy-5q9cd" [14c194d2-0e04-4f79-b8a4-7956f182e00b] Running
	I0916 19:13:35.040952  722185 system_pods.go:61] "kube-scheduler-addons-350900" [971e8f37-5245-41f4-8dac-4844c07ea380] Running
	I0916 19:13:35.040965  722185 system_pods.go:61] "metrics-server-84c5f94fbc-vp94l" [81bed691-751a-44db-8189-67b355235987] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0916 19:13:35.040974  722185 system_pods.go:61] "nvidia-device-plugin-daemonset-4vbhs" [b56b0b22-520b-4e8e-b4c1-4f9fb8b9f945] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0916 19:13:35.040984  722185 system_pods.go:61] "registry-66c9cd494c-fvpw8" [171ab3c7-51d1-4291-9dba-020409c54d0f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0916 19:13:35.040992  722185 system_pods.go:61] "registry-proxy-ttbjb" [85f2d6eb-dda8-4b16-a66c-74e84652b805] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0916 19:13:35.041004  722185 system_pods.go:61] "snapshot-controller-56fcc65765-b587t" [2cb5531e-05f2-4f59-a1c2-e4edc7ab1d6a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 19:13:35.041011  722185 system_pods.go:61] "snapshot-controller-56fcc65765-pwln9" [4b63166b-13f0-4afd-8c1e-cba71dc3deeb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 19:13:35.041020  722185 system_pods.go:61] "storage-provisioner" [046c453a-54fd-4201-bde2-588872eedf58] Running
	I0916 19:13:35.041027  722185 system_pods.go:74] duration metric: took 180.643835ms to wait for pod list to return data ...
	I0916 19:13:35.041043  722185 default_sa.go:34] waiting for default service account to be created ...
	I0916 19:13:35.134216  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:13:35.230943  722185 default_sa.go:45] found service account: "default"
	I0916 19:13:35.230974  722185 default_sa.go:55] duration metric: took 189.924127ms for default service account to be created ...
	I0916 19:13:35.230983  722185 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 19:13:35.402114  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:13:35.402766  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:13:35.439007  722185 system_pods.go:86] 18 kube-system pods found
	I0916 19:13:35.439048  722185 system_pods.go:89] "coredns-7c65d6cfc9-bclls" [2feb9570-24ab-4f35-b6b6-958fe24c7c04] Running
	I0916 19:13:35.439060  722185 system_pods.go:89] "csi-hostpath-attacher-0" [419ed2b3-0b65-4027-9234-89815031c02e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0916 19:13:35.439067  722185 system_pods.go:89] "csi-hostpath-resizer-0" [c6024dd7-15a6-4b51-a68d-f6498eb431d9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0916 19:13:35.439076  722185 system_pods.go:89] "csi-hostpathplugin-jnlrh" [8e2d78fb-8907-4b5e-bac2-f89cd1fef345] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0916 19:13:35.439081  722185 system_pods.go:89] "etcd-addons-350900" [70f00fba-4b31-4a57-8935-a3b93871c090] Running
	I0916 19:13:35.439087  722185 system_pods.go:89] "kindnet-pcskg" [7c04db60-1933-4251-8bd0-a81481231991] Running
	I0916 19:13:35.439091  722185 system_pods.go:89] "kube-apiserver-addons-350900" [1128bed3-248e-4f2e-b470-9407f06e7d4c] Running
	I0916 19:13:35.439096  722185 system_pods.go:89] "kube-controller-manager-addons-350900" [c60f8149-a592-4663-8ebc-ebb5d225e9b6] Running
	I0916 19:13:35.439108  722185 system_pods.go:89] "kube-ingress-dns-minikube" [15259ed8-5fe4-410f-bdec-4be3317c3dae] Running
	I0916 19:13:35.439125  722185 system_pods.go:89] "kube-proxy-5q9cd" [14c194d2-0e04-4f79-b8a4-7956f182e00b] Running
	I0916 19:13:35.439130  722185 system_pods.go:89] "kube-scheduler-addons-350900" [971e8f37-5245-41f4-8dac-4844c07ea380] Running
	I0916 19:13:35.439136  722185 system_pods.go:89] "metrics-server-84c5f94fbc-vp94l" [81bed691-751a-44db-8189-67b355235987] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0916 19:13:35.439145  722185 system_pods.go:89] "nvidia-device-plugin-daemonset-4vbhs" [b56b0b22-520b-4e8e-b4c1-4f9fb8b9f945] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0916 19:13:35.439152  722185 system_pods.go:89] "registry-66c9cd494c-fvpw8" [171ab3c7-51d1-4291-9dba-020409c54d0f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0916 19:13:35.439160  722185 system_pods.go:89] "registry-proxy-ttbjb" [85f2d6eb-dda8-4b16-a66c-74e84652b805] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0916 19:13:35.439167  722185 system_pods.go:89] "snapshot-controller-56fcc65765-b587t" [2cb5531e-05f2-4f59-a1c2-e4edc7ab1d6a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 19:13:35.439176  722185 system_pods.go:89] "snapshot-controller-56fcc65765-pwln9" [4b63166b-13f0-4afd-8c1e-cba71dc3deeb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 19:13:35.439180  722185 system_pods.go:89] "storage-provisioner" [046c453a-54fd-4201-bde2-588872eedf58] Running
	I0916 19:13:35.439189  722185 system_pods.go:126] duration metric: took 208.198943ms to wait for k8s-apps to be running ...
	I0916 19:13:35.439196  722185 system_svc.go:44] waiting for kubelet service to be running ...
	I0916 19:13:35.439255  722185 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 19:13:35.454581  722185 system_svc.go:56] duration metric: took 15.374494ms (WaitForService) to wait for kubelet
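
The kubelet check that just completed shells out to systemd and reads only the exit status: "systemctl is-active --quiet kubelet" prints nothing and exits 0 when the service is active. A local sketch of the same probe (the helper name is ours; minikube runs the command over its ssh_runner rather than locally):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// kubeletActive runs `systemctl is-active --quiet kubelet` and maps
	// the exit status to a bool; --quiet suppresses output so only the
	// exit code matters (0 means active).
	func kubeletActive() (bool, error) {
		err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
		if err == nil {
			return true, nil
		}
		if _, ok := err.(*exec.ExitError); ok {
			return false, nil // command ran, but the service is not active
		}
		return false, err // systemctl itself failed to run
	}

	func main() {
		active, err := kubeletActive()
		fmt.Println(active, err)
	}
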
	I0916 19:13:35.454619  722185 kubeadm.go:582] duration metric: took 49.844524986s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 19:13:35.454639  722185 node_conditions.go:102] verifying NodePressure condition ...
	I0916 19:13:35.633786  722185 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0916 19:13:35.633821  722185 node_conditions.go:123] node cpu capacity is 2
	I0916 19:13:35.633836  722185 node_conditions.go:105] duration metric: took 179.190663ms to run NodePressure ...
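
The NodePressure step reads the capacities the node reports (here 203034800Ki of ephemeral storage and 2 CPUs). A client-go sketch that pulls those same fields, assuming a kubeconfig at the default location:

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			// Status.Capacity is a ResourceList keyed by resource name.
			cpu := n.Status.Capacity[corev1.ResourceCPU]
			eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
			fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
		}
	}
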
	I0916 19:13:35.633849  722185 start.go:241] waiting for startup goroutines ...
	I0916 19:13:35.634245  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:13:35.901329  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:13:35.902905  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:13:36.132767  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:13:36.402430  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:13:36.402997  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:13:36.633331  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:13:36.901001  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:13:36.901352  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:13:37.133191  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:13:37.401212  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:13:37.402786  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:13:37.632007  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:13:37.901942  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:13:37.903551  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:13:38.132704  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:13:38.400359  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:13:38.400992  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:13:38.633518  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:13:38.900385  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:13:38.902347  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:13:39.134463  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:13:39.400598  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:13:39.402464  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:13:39.632831  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:13:39.902436  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:13:39.903105  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:13:40.142322  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:13:40.402572  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:13:40.403496  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:13:40.632398  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:13:40.900919  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:13:40.901360  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:13:41.132913  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:13:41.403348  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:13:41.404215  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:13:41.633115  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:13:41.901391  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:13:41.901824  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:13:42.132605  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:13:42.400453  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:13:42.401394  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:13:42.632167  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:13:42.900866  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:13:42.901998  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:13:43.138406  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:13:43.401114  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:13:43.402577  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:13:43.632588  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:13:43.900354  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:13:43.901404  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:13:44.132886  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:13:44.401035  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:13:44.401650  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:13:44.632054  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:13:44.902763  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:13:44.905464  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:13:45.139170  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:13:45.401078  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:13:45.401650  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:13:45.633003  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:13:45.901092  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:13:45.902426  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:13:46.133242  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:13:46.401232  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:13:46.402685  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:13:46.632405  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:13:46.902002  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:13:46.903296  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:13:47.139586  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:13:47.400469  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:13:47.401436  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:13:47.632719  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:13:47.900665  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:13:47.901369  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:13:48.132705  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:13:48.400536  722185 kapi.go:107] duration metric: took 52.504841093s to wait for kubernetes.io/minikube-addons=registry ...
	I0916 19:13:48.402060  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:13:48.635328  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:13:48.900846  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:13:49.137141  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:13:49.400615  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:13:49.632906  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:13:49.900145  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:13:50.132594  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:13:50.399981  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:13:50.633003  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:13:50.899861  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:13:51.132795  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:13:51.400216  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:13:51.632724  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:13:51.901245  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:13:52.133039  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:13:52.400899  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:13:52.632358  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:13:52.900346  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:13:53.132012  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:13:53.401371  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:13:53.632587  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:13:53.900178  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:13:54.133675  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:13:54.402468  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:13:54.634389  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:13:54.900702  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:13:55.133930  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:13:55.402233  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:13:55.632747  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:13:55.902200  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:13:56.132385  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:13:56.403555  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:13:56.631794  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:13:56.900157  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:13:57.132428  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:13:57.400681  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:13:57.633318  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:13:57.902529  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:13:58.134792  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:13:58.400973  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:13:58.633128  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:13:58.901642  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:13:59.132916  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:13:59.400728  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:13:59.632980  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:13:59.900348  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:14:00.138797  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:14:00.402023  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:14:00.632460  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:14:00.900010  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:14:01.132516  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:14:01.402155  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:14:01.633735  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:14:01.900456  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:14:02.156139  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:14:02.400797  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:14:02.632833  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:14:02.900470  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:14:03.133143  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:14:03.401219  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:14:03.633657  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:14:03.900083  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:14:04.132850  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:14:04.405261  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:14:04.632568  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:14:04.901720  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:14:05.132497  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:14:05.400720  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:14:05.633775  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:14:05.900794  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:14:06.133526  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:14:06.401418  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:14:06.634090  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:14:06.902514  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:14:07.132761  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:14:07.407404  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:14:07.634504  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:14:07.901376  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:14:08.132282  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:14:08.400663  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:14:08.631752  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:14:08.901027  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:14:09.132605  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:14:09.402979  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:14:09.634077  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:14:09.900771  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:14:10.132441  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:14:10.401171  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:14:10.632898  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:14:10.900263  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:14:11.136597  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:14:11.400799  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:14:11.631856  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:14:11.900615  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:14:12.132311  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:14:12.400812  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:14:12.633498  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:14:12.901406  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:14:13.132693  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:14:13.403119  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:14:13.632491  722185 kapi.go:107] duration metric: took 1m17.004943993s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0916 19:14:13.899865  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:14:14.400891  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:14:14.899581  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:14:15.401120  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:14:15.900269  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:14:16.400501  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:14:16.900569  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:14:17.400617  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:14:17.900581  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:14:18.401330  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:14:18.900619  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:14:19.400138  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:14:19.900039  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:14:20.400955  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:14:20.900306  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:14:21.400568  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:14:21.900216  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:14:22.400105  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:14:22.899621  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:14:23.400481  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:14:23.900358  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:14:24.400264  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:14:24.900344  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:14:25.399895  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:14:25.900150  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:14:26.400562  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:14:26.900216  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:14:27.402176  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:14:27.899823  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:14:28.400279  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:14:28.900755  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:14:29.401021  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:14:29.900094  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:14:30.401122  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:14:30.899917  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:14:31.400130  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:14:31.899790  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:14:32.400853  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:14:32.900917  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:14:33.399553  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:14:33.899943  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:14:34.400656  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:14:34.899647  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:14:35.400236  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:14:35.900522  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:14:36.401040  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:14:36.900069  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:14:37.400875  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:14:37.899669  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:14:38.400835  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:14:38.900036  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:14:39.400739  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:14:39.900466  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:14:40.401105  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:14:40.899806  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:14:41.399723  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:14:41.900244  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:14:42.401105  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:14:42.899780  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:14:43.399745  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:14:43.901091  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:14:44.400541  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:14:44.900230  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:14:45.401550  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:14:45.899739  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:14:46.399984  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:14:46.900438  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:14:47.400027  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:14:47.900164  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:14:48.401034  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:14:48.900092  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:14:49.400224  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:14:49.900160  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:14:50.400447  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:14:50.900254  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:14:51.400596  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:14:51.900665  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:14:52.400776  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:14:52.900580  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:14:53.400724  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:14:53.900572  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:14:54.401423  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:14:54.900843  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:14:55.400599  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:14:55.900591  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:14:56.400168  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:14:56.899791  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:14:57.400362  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:14:57.900384  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:14:58.401069  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:14:58.900356  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:14:59.400863  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:14:59.900316  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:15:00.400611  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:15:00.900402  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:15:01.400888  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:15:01.902743  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:15:02.401300  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:15:02.901402  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:15:03.400016  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:15:03.901052  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:15:04.400067  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:15:04.901509  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:15:05.400851  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:15:05.900587  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:15:06.400142  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:15:06.900441  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:15:07.401007  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:15:07.899625  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:15:08.400015  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:15:08.901372  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:15:09.400908  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:15:09.900290  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:15:10.402136  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:15:10.901409  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:15:11.400100  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:15:11.900477  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:15:12.400165  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:15:12.900754  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:15:13.402208  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:15:13.900742  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:15:14.400956  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:15:14.901029  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:15:15.400268  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:15:15.900389  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:15:16.400683  722185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:15:16.901278  722185 kapi.go:107] duration metric: took 2m21.005566182s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0916 19:15:43.013805  722185 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0916 19:15:43.013835  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:15:43.514474  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:15:44.014385  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:15:44.514689  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:15:45.015893  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:15:45.513703  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:15:46.014116  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:15:46.513673  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:15:47.013110  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:15:47.513845  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:15:48.015669  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:15:48.513496  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:15:49.014655  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:15:49.514067  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:15:50.015266  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:15:50.514474  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:15:51.013806  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:15:51.513815  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:15:52.014035  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:15:52.514706  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:15:53.013750  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:15:53.513098  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:15:54.014488  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:15:54.513718  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:15:55.015236  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:15:55.514352  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:15:56.014147  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:15:56.514007  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:15:57.014014  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:15:57.513566  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:15:58.014197  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:15:58.513952  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:15:59.014011  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:15:59.513705  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:16:00.026071  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:16:00.514135  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:16:01.014047  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:16:01.516654  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:16:02.014278  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:16:02.513558  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:16:03.014754  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:16:03.513502  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:16:04.015270  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:16:04.513502  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:16:05.018123  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:16:05.513904  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:16:06.019839  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:16:06.513462  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:16:07.014271  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:16:07.514110  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:16:08.013603  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:16:08.514705  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:16:09.013832  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:16:09.515048  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:16:10.021368  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:16:10.514291  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:16:11.013736  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:16:11.513270  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:16:12.013666  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:16:12.513998  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:16:13.014873  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:16:13.514222  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:16:14.014122  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:16:14.513767  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:16:15.016488  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:16:15.514369  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:16:16.014656  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:16:16.513355  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:16:17.015645  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:16:17.514412  722185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:16:18.014387  722185 kapi.go:107] duration metric: took 3m20.004322529s to wait for kubernetes.io/minikube-addons=gcp-auth ...
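The kapi.go lines above come from minikube's label-selector poll loop: list the pods matching a selector, log the phase while any of them is still Pending, and emit a duration metric once all of them are Running. The client-go sketch below reproduces that pattern for illustration only; it is not minikube's actual implementation, and the namespace, poll interval, timeout, and function name are assumptions.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPods polls until every pod matching selector is Running, printing
// one line per poll interval, much like the "waiting for pod ... current
// state: Pending" lines in the log above. Hypothetical helper, not minikube's.
func waitForPods(ctx context.Context, cs *kubernetes.Clientset, ns, selector string) error {
	start := time.Now()
	for {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return err
		}
		allRunning := len(pods.Items) > 0 // zero matches means keep waiting
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				allRunning = false
				fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
			}
		}
		if allRunning {
			fmt.Printf("duration metric: took %s to wait for %s\n", time.Since(start), selector)
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err() // timeout or cancellation ends the wait
		case <-time.After(500 * time.Millisecond):
		}
	}
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	if err := waitForPods(ctx, cs, "ingress-nginx", "app.kubernetes.io/name=ingress-nginx"); err != nil {
		panic(err)
	}
}

Run against a live cluster, this produces output shaped like the poll lines above, one line per interval until the selector's pods are Running.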
	I0916 19:16:18.016110  722185 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-350900 cluster.
	I0916 19:16:18.017417  722185 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0916 19:16:18.018923  722185 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0916 19:16:18.021123  722185 out.go:177] * Enabled addons: nvidia-device-plugin, storage-provisioner, default-storageclass, volcano, ingress-dns, cloud-spanner, metrics-server, inspektor-gadget, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0916 19:16:18.022637  722185 addons.go:510] duration metric: took 3m32.412293411s for enable addons: enabled=[nvidia-device-plugin storage-provisioner default-storageclass volcano ingress-dns cloud-spanner metrics-server inspektor-gadget yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0916 19:16:18.022709  722185 start.go:246] waiting for cluster config update ...
	I0916 19:16:18.022732  722185 start.go:255] writing updated cluster config ...
	I0916 19:16:18.023082  722185 ssh_runner.go:195] Run: rm -f paused
	I0916 19:16:18.392745  722185 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0916 19:16:18.394550  722185 out.go:177] * Done! kubectl is now configured to use "addons-350900" cluster and "default" namespace by default
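One concrete way to act on the `gcp-auth-skip-secret` hint printed in the run log above, shown as a hedged sketch rather than a prescribed recipe: put the label in the pod's configuration at creation time, since the gcp-auth webhook mutates pods on admission, and labeling an already-running pod will not remove an existing mount (which is why the log suggests recreating existing pods). The label key is taken from the log; the pod name, namespace, image, and the value "true" are hypothetical.

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// Label key from the log above; presence of the key is what matters.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "no-gcp-creds", // hypothetical pod name
			Labels: map[string]string{"gcp-auth-skip-secret": "true"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{
				{Name: "app", Image: "nginx"}, // hypothetical container
			},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.Background(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}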
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	33febe36dd0e0       4f725bf50aaa5       21 seconds ago      Exited              gadget                                   6                   26cc9552f51cf       gadget-l5blt
	63a6443b5558a       6ef582f3ec844       3 minutes ago       Running             gcp-auth                                 0                   6d3f3f8c3b9b6       gcp-auth-89d5ffd79-rxm46
	82581b30cf7b7       289a818c8d9c5       4 minutes ago       Running             controller                               0                   53655cb35765c       ingress-nginx-controller-bc57996ff-xrxl7
	41fd1e79f4568       8b46b1cd48760       4 minutes ago       Running             admission                                0                   325fe1d561fb2       volcano-admission-77d7d48b68-5mhdx
	01b153c5e29f8       d9c7ad4c226bf       4 minutes ago       Running             volcano-scheduler                        1                   806fbbcd7c489       volcano-scheduler-576bc46687-jqfsm
	09c10b847bde9       ee6d597e62dc8       5 minutes ago       Running             csi-snapshotter                          0                   fed8c08fb9548       csi-hostpathplugin-jnlrh
	26cc350e493cf       642ded511e141       5 minutes ago       Running             csi-provisioner                          0                   fed8c08fb9548       csi-hostpathplugin-jnlrh
	568aa5f229410       922312104da8a       5 minutes ago       Running             liveness-probe                           0                   fed8c08fb9548       csi-hostpathplugin-jnlrh
	7920c39e13a99       08f6b2990811a       5 minutes ago       Running             hostpath                                 0                   fed8c08fb9548       csi-hostpathplugin-jnlrh
	7f96668bdb8c4       0107d56dbc0be       5 minutes ago       Running             node-driver-registrar                    0                   fed8c08fb9548       csi-hostpathplugin-jnlrh
	45cffef4cde57       420193b27261a       5 minutes ago       Exited              patch                                    2                   68e9bc8edfa19       ingress-nginx-admission-patch-6h4jw
	b8011a3011336       9a80d518f102c       5 minutes ago       Running             csi-attacher                             0                   52cf1b94321e4       csi-hostpath-attacher-0
	b0a52b156c49d       4d1e5c3e97420       5 minutes ago       Running             volume-snapshot-controller               0                   4ef2ea6499267       snapshot-controller-56fcc65765-b587t
	f85c1a7556e41       420193b27261a       5 minutes ago       Exited              create                                   0                   aa4a92e9dfdef       ingress-nginx-admission-create-6pph2
	b2d7015e6eccc       7ce2150c8929b       5 minutes ago       Running             local-path-provisioner                   0                   a843145f82358       local-path-provisioner-86d989889c-gnc7b
	19e69c8a099f2       d9c7ad4c226bf       5 minutes ago       Exited              volcano-scheduler                        0                   806fbbcd7c489       volcano-scheduler-576bc46687-jqfsm
	f9eb8f0f31973       487fa743e1e22       5 minutes ago       Running             csi-resizer                              0                   24f3f963b2c88       csi-hostpath-resizer-0
	68c9896993f21       1505f556b3a7b       5 minutes ago       Running             volcano-controllers                      0                   ff035be4e81f4       volcano-controllers-56675bb4d5-fpbw7
	b10bea78bad76       5548a49bb60ba       5 minutes ago       Running             metrics-server                           0                   67a2af0746224       metrics-server-84c5f94fbc-vp94l
	f879d07743599       1461903ec4fe9       5 minutes ago       Running             csi-external-health-monitor-controller   0                   fed8c08fb9548       csi-hostpathplugin-jnlrh
	574b215d0c3ff       4d1e5c3e97420       5 minutes ago       Running             volume-snapshot-controller               0                   96508305b7872       snapshot-controller-56fcc65765-pwln9
	3bd34747851c7       3410e1561990a       5 minutes ago       Running             registry-proxy                           0                   0d2ff1a264d33       registry-proxy-ttbjb
	b7f576ed0f2d7       c9cf76bb104e1       5 minutes ago       Running             registry                                 0                   828691330c39a       registry-66c9cd494c-fvpw8
	ceb4f0d241c49       77bdba588b953       5 minutes ago       Running             yakd                                     0                   886f41fb48597       yakd-dashboard-67d98fc6b-khgpx
	f9009859a01f0       a9bac31a5be8d       5 minutes ago       Running             nvidia-device-plugin-ctr                 0                   88fb9b0295e29       nvidia-device-plugin-daemonset-4vbhs
	00a3f9781b988       8be4bcf8ec607       6 minutes ago       Running             cloud-spanner-emulator                   0                   a4cc5c563fce4       cloud-spanner-emulator-769b77f747-dx8wr
	d30e7ee04878f       2f6c962e7b831       6 minutes ago       Running             coredns                                  0                   5bde9b08a04c0       coredns-7c65d6cfc9-bclls
	82fb39c145bf1       35508c2f890c4       6 minutes ago       Running             minikube-ingress-dns                     0                   da704bef79542       kube-ingress-dns-minikube
	efb61ee827e62       ba04bb24b9575       6 minutes ago       Running             storage-provisioner                      0                   ab98561686dae       storage-provisioner
	f80a8c7ad4e82       24a140c548c07       6 minutes ago       Running             kube-proxy                               0                   9adea8167867a       kube-proxy-5q9cd
	d97d888fcdd04       6a23fa8fd2b78       6 minutes ago       Running             kindnet-cni                              0                   91517b7166283       kindnet-pcskg
	56519616ba8f6       27e3830e14027       7 minutes ago       Running             etcd                                     0                   76be07393791d       etcd-addons-350900
	1224c0a8accc8       7f8aa378bb47d       7 minutes ago       Running             kube-scheduler                           0                   e6eb18bf53666       kube-scheduler-addons-350900
	c1eb8ca4fb41a       279f381cb3736       7 minutes ago       Running             kube-controller-manager                  0                   96f0f8cebef02       kube-controller-manager-addons-350900
	3031e076eddda       d3f53a98c0a9d       7 minutes ago       Running             kube-apiserver                           0                   dadce8dda1991       kube-apiserver-addons-350900
	
	
	==> containerd <==
	Sep 16 19:16:40 addons-350900 containerd[812]: time="2024-09-16T19:16:40.858933343Z" level=info msg="TearDown network for sandbox \"5775ef2604ce504f8c96ecaf872709f294eee9bcebcbcea75f7609fa761b8140\" successfully"
	Sep 16 19:16:40 addons-350900 containerd[812]: time="2024-09-16T19:16:40.858973744Z" level=info msg="StopPodSandbox for \"5775ef2604ce504f8c96ecaf872709f294eee9bcebcbcea75f7609fa761b8140\" returns successfully"
	Sep 16 19:16:40 addons-350900 containerd[812]: time="2024-09-16T19:16:40.859623987Z" level=info msg="RemovePodSandbox for \"5775ef2604ce504f8c96ecaf872709f294eee9bcebcbcea75f7609fa761b8140\""
	Sep 16 19:16:40 addons-350900 containerd[812]: time="2024-09-16T19:16:40.859763832Z" level=info msg="Forcibly stopping sandbox \"5775ef2604ce504f8c96ecaf872709f294eee9bcebcbcea75f7609fa761b8140\""
	Sep 16 19:16:40 addons-350900 containerd[812]: time="2024-09-16T19:16:40.869019235Z" level=info msg="TearDown network for sandbox \"5775ef2604ce504f8c96ecaf872709f294eee9bcebcbcea75f7609fa761b8140\" successfully"
	Sep 16 19:16:40 addons-350900 containerd[812]: time="2024-09-16T19:16:40.875696592Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5775ef2604ce504f8c96ecaf872709f294eee9bcebcbcea75f7609fa761b8140\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Sep 16 19:16:40 addons-350900 containerd[812]: time="2024-09-16T19:16:40.875948541Z" level=info msg="RemovePodSandbox \"5775ef2604ce504f8c96ecaf872709f294eee9bcebcbcea75f7609fa761b8140\" returns successfully"
	Sep 16 19:19:14 addons-350900 containerd[812]: time="2024-09-16T19:19:14.764106610Z" level=info msg="PullImage \"ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec\""
	Sep 16 19:19:14 addons-350900 containerd[812]: time="2024-09-16T19:19:14.897496428Z" level=info msg="ImageUpdate event name:\"ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Sep 16 19:19:14 addons-350900 containerd[812]: time="2024-09-16T19:19:14.898673002Z" level=info msg="stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec: active requests=0, bytes read=89"
	Sep 16 19:19:14 addons-350900 containerd[812]: time="2024-09-16T19:19:14.902400513Z" level=info msg="Pulled image \"ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec\" with image id \"sha256:4f725bf50aaa5c697fbb84c107e9c7a3766f0f85f514ffce712d03ee5f62e8dd\", repo tag \"\", repo digest \"ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec\", size \"72524105\" in 138.241868ms"
	Sep 16 19:19:14 addons-350900 containerd[812]: time="2024-09-16T19:19:14.902446862Z" level=info msg="PullImage \"ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec\" returns image reference \"sha256:4f725bf50aaa5c697fbb84c107e9c7a3766f0f85f514ffce712d03ee5f62e8dd\""
	Sep 16 19:19:14 addons-350900 containerd[812]: time="2024-09-16T19:19:14.904658622Z" level=info msg="CreateContainer within sandbox \"26cc9552f51cf0849df510151fcf7d7bac7066652b54780ba76ca5628ad327f5\" for container &ContainerMetadata{Name:gadget,Attempt:6,}"
	Sep 16 19:19:14 addons-350900 containerd[812]: time="2024-09-16T19:19:14.921819848Z" level=info msg="CreateContainer within sandbox \"26cc9552f51cf0849df510151fcf7d7bac7066652b54780ba76ca5628ad327f5\" for &ContainerMetadata{Name:gadget,Attempt:6,} returns container id \"33febe36dd0e0df42c314ba8b89fcb4f0d39b5381077d829a78bb5f0c6a1452f\""
	Sep 16 19:19:14 addons-350900 containerd[812]: time="2024-09-16T19:19:14.922537404Z" level=info msg="StartContainer for \"33febe36dd0e0df42c314ba8b89fcb4f0d39b5381077d829a78bb5f0c6a1452f\""
	Sep 16 19:19:14 addons-350900 containerd[812]: time="2024-09-16T19:19:14.974669896Z" level=info msg="StartContainer for \"33febe36dd0e0df42c314ba8b89fcb4f0d39b5381077d829a78bb5f0c6a1452f\" returns successfully"
	Sep 16 19:19:16 addons-350900 containerd[812]: time="2024-09-16T19:19:16.426915655Z" level=error msg="ExecSync for \"33febe36dd0e0df42c314ba8b89fcb4f0d39b5381077d829a78bb5f0c6a1452f\" failed" error="failed to exec in container: failed to start exec \"cb5ffeb4ce95cd39dd78ee6cbb5dcae2c47ffd685ba9eeae1ae369e6330730b3\": OCI runtime exec failed: exec failed: unable to start container process: error executing setns process: exit status 1: unknown"
	Sep 16 19:19:16 addons-350900 containerd[812]: time="2024-09-16T19:19:16.446715552Z" level=error msg="ExecSync for \"33febe36dd0e0df42c314ba8b89fcb4f0d39b5381077d829a78bb5f0c6a1452f\" failed" error="failed to exec in container: failed to start exec \"f3d869a4817cca65d874efc78b0bb817dd5be207009cc83d12e903de05f4e805\": OCI runtime exec failed: exec failed: unable to start container process: error executing setns process: exit status 1: unknown"
	Sep 16 19:19:16 addons-350900 containerd[812]: time="2024-09-16T19:19:16.466597311Z" level=error msg="ExecSync for \"33febe36dd0e0df42c314ba8b89fcb4f0d39b5381077d829a78bb5f0c6a1452f\" failed" error="failed to exec in container: failed to start exec \"482b0d5ba4d88401cabfefc50548e696a69abee96bc34adfe969ac6654ca9325\": OCI runtime exec failed: exec failed: unable to start container process: error executing setns process: exit status 1: unknown"
	Sep 16 19:19:16 addons-350900 containerd[812]: time="2024-09-16T19:19:16.575487262Z" level=info msg="shim disconnected" id=33febe36dd0e0df42c314ba8b89fcb4f0d39b5381077d829a78bb5f0c6a1452f namespace=k8s.io
	Sep 16 19:19:16 addons-350900 containerd[812]: time="2024-09-16T19:19:16.575546321Z" level=warning msg="cleaning up after shim disconnected" id=33febe36dd0e0df42c314ba8b89fcb4f0d39b5381077d829a78bb5f0c6a1452f namespace=k8s.io
	Sep 16 19:19:16 addons-350900 containerd[812]: time="2024-09-16T19:19:16.575556348Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 16 19:19:16 addons-350900 containerd[812]: time="2024-09-16T19:19:16.589072654Z" level=warning msg="cleanup warnings time=\"2024-09-16T19:19:16Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
	Sep 16 19:19:17 addons-350900 containerd[812]: time="2024-09-16T19:19:17.245426190Z" level=info msg="RemoveContainer for \"d928f0fa194c219b39b0112d664820ad9eee2e649abe451899a9de1e3a6d771a\""
	Sep 16 19:19:17 addons-350900 containerd[812]: time="2024-09-16T19:19:17.250726459Z" level=info msg="RemoveContainer for \"d928f0fa194c219b39b0112d664820ad9eee2e649abe451899a9de1e3a6d771a\" returns successfully"
	
	
	==> coredns [d30e7ee04878f095df5090548ba1655b1013ca8aa1339d484885371223124786] <==
	[INFO] 10.244.0.6:57920 - 14140 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000076758s
	[INFO] 10.244.0.6:38196 - 29610 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.00290129s
	[INFO] 10.244.0.6:38196 - 13224 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002940206s
	[INFO] 10.244.0.6:59178 - 41888 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000147377s
	[INFO] 10.244.0.6:59178 - 63647 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000092191s
	[INFO] 10.244.0.6:58508 - 27 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000121064s
	[INFO] 10.244.0.6:58508 - 40545 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.0000517s
	[INFO] 10.244.0.6:36152 - 34992 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000075847s
	[INFO] 10.244.0.6:36152 - 7349 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000194769s
	[INFO] 10.244.0.6:40917 - 26990 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000083166s
	[INFO] 10.244.0.6:40917 - 20332 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000039539s
	[INFO] 10.244.0.6:60256 - 32628 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002922648s
	[INFO] 10.244.0.6:60256 - 46711 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001622129s
	[INFO] 10.244.0.6:44719 - 57845 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000156567s
	[INFO] 10.244.0.6:44719 - 2553 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000085915s
	[INFO] 10.244.0.24:46770 - 34110 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000188099s
	[INFO] 10.244.0.24:60074 - 53240 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000093783s
	[INFO] 10.244.0.24:59271 - 3771 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000175611s
	[INFO] 10.244.0.24:52501 - 37105 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000065705s
	[INFO] 10.244.0.24:51296 - 6829 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000086021s
	[INFO] 10.244.0.24:60418 - 61564 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000095211s
	[INFO] 10.244.0.24:58851 - 15003 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.007715144s
	[INFO] 10.244.0.24:38797 - 42432 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.006268332s
	[INFO] 10.244.0.24:56900 - 31470 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 572 0.002289002s
	[INFO] 10.244.0.24:52370 - 53296 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002480596s
	
	
	==> describe nodes <==
	Name:               addons-350900
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-350900
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=91d692c919753635ac118b7ed7ae5503b67c63c8
	                    minikube.k8s.io/name=addons-350900
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T19_12_41_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-350900
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-350900"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 19:12:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-350900
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 19:19:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 19:16:46 +0000   Mon, 16 Sep 2024 19:12:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 19:16:46 +0000   Mon, 16 Sep 2024 19:12:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 19:16:46 +0000   Mon, 16 Sep 2024 19:12:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 19:16:46 +0000   Mon, 16 Sep 2024 19:12:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-350900
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 9972adef9b7249f1ac78c0e181dd9749
	  System UUID:                e38b201b-5fa6-4046-8249-8ac7efa694b5
	  Boot ID:                    486805ab-1132-42a1-beb7-17af684154aa
	  Kernel Version:             5.15.0-1069-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.22
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (27 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-769b77f747-dx8wr     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m48s
	  gadget                      gadget-l5blt                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m45s
	  gcp-auth                    gcp-auth-89d5ffd79-rxm46                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m55s
	  ingress-nginx               ingress-nginx-controller-bc57996ff-xrxl7    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         6m44s
	  kube-system                 coredns-7c65d6cfc9-bclls                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     6m51s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m41s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m41s
	  kube-system                 csi-hostpathplugin-jnlrh                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m41s
	  kube-system                 etcd-addons-350900                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         6m56s
	  kube-system                 kindnet-pcskg                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      6m52s
	  kube-system                 kube-apiserver-addons-350900                250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m56s
	  kube-system                 kube-controller-manager-addons-350900       200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m56s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m47s
	  kube-system                 kube-proxy-5q9cd                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m52s
	  kube-system                 kube-scheduler-addons-350900                100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m56s
	  kube-system                 metrics-server-84c5f94fbc-vp94l             100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         6m46s
	  kube-system                 nvidia-device-plugin-daemonset-4vbhs        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m49s
	  kube-system                 registry-66c9cd494c-fvpw8                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m48s
	  kube-system                 registry-proxy-ttbjb                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m48s
	  kube-system                 snapshot-controller-56fcc65765-b587t        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m45s
	  kube-system                 snapshot-controller-56fcc65765-pwln9        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m45s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m47s
	  local-path-storage          local-path-provisioner-86d989889c-gnc7b     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m46s
	  volcano-system              volcano-admission-77d7d48b68-5mhdx          0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m43s
	  volcano-system              volcano-controllers-56675bb4d5-fpbw7        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m42s
	  volcano-system              volcano-scheduler-576bc46687-jqfsm          0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m42s
	  yakd-dashboard              yakd-dashboard-67d98fc6b-khgpx              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     6m47s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 6m50s  kube-proxy       
	  Normal   Starting                 6m57s  kubelet          Starting kubelet.
	  Warning  CgroupV1                 6m57s  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  6m57s  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  6m56s  kubelet          Node addons-350900 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m56s  kubelet          Node addons-350900 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m56s  kubelet          Node addons-350900 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           6m53s  node-controller  Node addons-350900 event: Registered Node addons-350900 in Controller
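The allocation table above is the proximate cause of the Volcano failure: CPU requests already total 1050m (52%) on this node, so test-job-nginx-0's request does not fit in the remaining headroom and the pod stays Unschedulable with "Insufficient cpu". A minimal sketch for comparing the two numbers while the pod is still pending (pod, namespace, and node names taken from this report; the jsonpath query is plain kubectl, not part of the test):

  # How much CPU does the pending pod request?
  kubectl --context addons-350900 -n my-volcano get pod test-job-nginx-0 \
    -o jsonpath='{.spec.containers[*].resources.requests.cpu}'
  # How much is already committed on the node?
  kubectl --context addons-350900 describe node addons-350900 | grep -A8 'Allocated resources'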
	
	
	==> dmesg <==
	[Sep16 18:44] overlayfs: '/var/lib/containers/storage/overlay/l/Q2QJNMTVZL6GMULS36RA5ZJGSA' not a directory
	
	
	==> etcd [56519616ba8f6b4ea87872edb254cde85564d87fd860cbfc950f3e528257cee7] <==
	{"level":"info","ts":"2024-09-16T19:12:34.911945Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-09-16T19:12:34.911996Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-09-16T19:12:34.912034Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-16T19:12:34.912077Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-09-16T19:12:34.912112Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-16T19:12:34.912171Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-16T19:12:34.912208Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-16T19:12:34.912499Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-16T19:12:34.912856Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-16T19:12:34.912990Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-16T19:12:34.913209Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-16T19:12:34.913289Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-16T19:12:34.919497Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-350900 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T19:12:34.919678Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T19:12:34.919783Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T19:12:34.920162Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T19:12:34.923376Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T19:12:34.923418Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T19:12:34.923453Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T19:12:34.923537Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T19:12:34.923573Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T19:12:34.924319Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T19:12:34.928336Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-09-16T19:12:34.924319Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T19:12:34.943823Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> gcp-auth [63a6443b5558ae37f93d16ae6b6cf1d13544f0e284ffc8b04fc959a4a8338b3a] <==
	2024/09/16 19:16:17 GCP Auth Webhook started!
	2024/09/16 19:16:34 Ready to marshal response ...
	2024/09/16 19:16:34 Ready to write response ...
	2024/09/16 19:16:35 Ready to marshal response ...
	2024/09/16 19:16:35 Ready to write response ...
	
	
	==> kernel <==
	 19:19:37 up  3:01,  0 users,  load average: 0.47, 1.22, 2.02
	Linux addons-350900 5.15.0-1069-aws #75~20.04.1-Ubuntu SMP Mon Aug 19 16:22:47 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [d97d888fcdd0467201dd9a2d651acd3782305534d881b7df88d6aa02ccba7f8d] <==
	I0916 19:17:37.227718       1 main.go:299] handling current node
	I0916 19:17:47.219854       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 19:17:47.219982       1 main.go:299] handling current node
	I0916 19:17:57.225529       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 19:17:57.225569       1 main.go:299] handling current node
	I0916 19:18:07.228505       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 19:18:07.228698       1 main.go:299] handling current node
	I0916 19:18:17.226017       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 19:18:17.226141       1 main.go:299] handling current node
	I0916 19:18:27.225989       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 19:18:27.226025       1 main.go:299] handling current node
	I0916 19:18:37.220580       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 19:18:37.220680       1 main.go:299] handling current node
	I0916 19:18:47.220182       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 19:18:47.220291       1 main.go:299] handling current node
	I0916 19:18:57.225479       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 19:18:57.225574       1 main.go:299] handling current node
	I0916 19:19:07.220990       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 19:19:07.221026       1 main.go:299] handling current node
	I0916 19:19:17.219902       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 19:19:17.219941       1 main.go:299] handling current node
	I0916 19:19:27.225720       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 19:19:27.225756       1 main.go:299] handling current node
	I0916 19:19:37.227648       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 19:19:37.227677       1 main.go:299] handling current node
	
	
	==> kube-apiserver [3031e076edddadf512c6c2a304d990ee08bee7315ce05417c78c42f3c65a0d19] <==
	W0916 19:15:00.863774       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.103.84.124:443: connect: connection refused
	W0916 19:15:01.906466       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.103.84.124:443: connect: connection refused
	W0916 19:15:03.006166       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.103.84.124:443: connect: connection refused
	W0916 19:15:04.081359       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.103.84.124:443: connect: connection refused
	W0916 19:15:04.820077       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.103.84.124:443: connect: connection refused
	W0916 19:15:05.896123       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.103.84.124:443: connect: connection refused
	W0916 19:15:06.988044       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.103.84.124:443: connect: connection refused
	W0916 19:15:08.041921       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.103.84.124:443: connect: connection refused
	W0916 19:15:09.064355       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.103.84.124:443: connect: connection refused
	W0916 19:15:10.105550       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.103.84.124:443: connect: connection refused
	W0916 19:15:11.143973       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.103.84.124:443: connect: connection refused
	W0916 19:15:12.232834       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.103.84.124:443: connect: connection refused
	W0916 19:15:13.281235       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.103.84.124:443: connect: connection refused
	W0916 19:15:14.365743       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.103.84.124:443: connect: connection refused
	W0916 19:15:15.390934       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.103.84.124:443: connect: connection refused
	W0916 19:15:16.483683       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.103.84.124:443: connect: connection refused
	W0916 19:15:17.532478       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.103.84.124:443: connect: connection refused
	W0916 19:15:42.723377       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.98.103.247:443: connect: connection refused
	E0916 19:15:42.723425       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.98.103.247:443: connect: connection refused" logger="UnhandledError"
	W0916 19:16:00.759558       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.98.103.247:443: connect: connection refused
	E0916 19:16:00.759597       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.98.103.247:443: connect: connection refused" logger="UnhandledError"
	W0916 19:16:00.842604       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.98.103.247:443: connect: connection refused
	E0916 19:16:00.842646       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.98.103.247:443: connect: connection refused" logger="UnhandledError"
	I0916 19:16:34.910922       1 controller.go:615] quota admission added evaluator for: jobs.batch.volcano.sh
	I0916 19:16:34.950251       1 controller.go:615] quota admission added evaluator for: podgroups.scheduling.volcano.sh
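The long run of "failed calling webhook ... connection refused" warnings above is the API server failing closed while volcano-admission-service had no ready backend; the quota evaluators registered at 19:16:34 show the webhook did eventually answer. A sketch of the corresponding readiness check (service name taken from the webhook URL in the log; the app=volcano-admission label is assumed from the pod name and not confirmed by this report):

  kubectl --context addons-350900 -n volcano-system get endpoints volcano-admission-service
  kubectl --context addons-350900 -n volcano-system get pods -l app=volcano-admission -o wide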
	
	
	==> kube-controller-manager [c1eb8ca4fb41a4642bee803140d6f2e1fc651544aa04053e2c5ecaacae93fcc9] <==
	I0916 19:16:00.779834       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0916 19:16:00.790921       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0916 19:16:00.805707       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0916 19:16:00.852623       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0916 19:16:00.858328       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0916 19:16:00.868703       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0916 19:16:00.875616       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0916 19:16:01.728190       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0916 19:16:01.757723       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0916 19:16:02.828160       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0916 19:16:02.856622       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0916 19:16:03.834787       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0916 19:16:03.841313       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0916 19:16:03.848236       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0916 19:16:03.863800       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0916 19:16:03.872051       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0916 19:16:03.879590       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0916 19:16:17.764305       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="10.901966ms"
	I0916 19:16:17.765816       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="51.609µs"
	I0916 19:16:33.024163       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="0s"
	I0916 19:16:33.031609       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="0s"
	I0916 19:16:33.067364       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="0s"
	I0916 19:16:33.067722       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="0s"
	I0916 19:16:34.629360       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="volcano-system/volcano-admission-init" delay="0s"
	I0916 19:16:46.031453       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-350900"
	
	
	==> kube-proxy [f80a8c7ad4e82b9c7432c360590d25fc331503692f709b9e97145e5f14820f0d] <==
	I0916 19:12:46.905672       1 server_linux.go:66] "Using iptables proxy"
	I0916 19:12:47.023197       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0916 19:12:47.023275       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 19:12:47.062630       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 19:12:47.062697       1 server_linux.go:169] "Using iptables Proxier"
	I0916 19:12:47.064621       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 19:12:47.064923       1 server.go:483] "Version info" version="v1.31.1"
	I0916 19:12:47.064959       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 19:12:47.069789       1 config.go:199] "Starting service config controller"
	I0916 19:12:47.069826       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 19:12:47.069875       1 config.go:105] "Starting endpoint slice config controller"
	I0916 19:12:47.069884       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 19:12:47.073138       1 config.go:328] "Starting node config controller"
	I0916 19:12:47.073183       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 19:12:47.171433       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0916 19:12:47.171484       1 shared_informer.go:320] Caches are synced for service config
	I0916 19:12:47.173536       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [1224c0a8accc89ce7d9f765c8c28cba90148a319fcb5fd6e41e9ce94f73be543] <==
	W0916 19:12:38.568375       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0916 19:12:38.568706       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 19:12:38.568450       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0916 19:12:38.568752       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 19:12:38.568510       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0916 19:12:38.568782       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 19:12:38.568932       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0916 19:12:38.568956       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 19:12:38.569057       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0916 19:12:38.569276       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 19:12:38.569149       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0916 19:12:38.569308       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 19:12:38.569248       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0916 19:12:38.569329       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 19:12:38.569464       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0916 19:12:38.569488       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0916 19:12:38.569570       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0916 19:12:38.569592       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 19:12:38.569706       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0916 19:12:38.569793       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 19:12:39.389307       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0916 19:12:39.389557       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 19:12:39.443903       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0916 19:12:39.444016       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0916 19:12:40.253194       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
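The forbidden list/watch errors here are the usual control-plane bootstrap race: the scheduler starts its informers before the default RBAC policy has been reconciled, and the final "Caches are synced" line shows they cleared on their own. Only if they persisted would the bootstrap binding be worth inspecting; a sketch (system:kube-scheduler is the stock ClusterRoleBinding shipped with Kubernetes):

  kubectl --context addons-350900 get clusterrolebinding system:kube-scheduler -o wide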
	
	
	==> kubelet <==
	Sep 16 19:17:50 addons-350900 kubelet[1468]: E0916 19:17:50.764622    1468 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-l5blt_gadget(db32689d-8dbb-4ef1-9bb0-f4933889f4e1)\"" pod="gadget/gadget-l5blt" podUID="db32689d-8dbb-4ef1-9bb0-f4933889f4e1"
	Sep 16 19:18:05 addons-350900 kubelet[1468]: I0916 19:18:05.763535    1468 scope.go:117] "RemoveContainer" containerID="d928f0fa194c219b39b0112d664820ad9eee2e649abe451899a9de1e3a6d771a"
	Sep 16 19:18:05 addons-350900 kubelet[1468]: E0916 19:18:05.763757    1468 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-l5blt_gadget(db32689d-8dbb-4ef1-9bb0-f4933889f4e1)\"" pod="gadget/gadget-l5blt" podUID="db32689d-8dbb-4ef1-9bb0-f4933889f4e1"
	Sep 16 19:18:19 addons-350900 kubelet[1468]: I0916 19:18:19.762986    1468 scope.go:117] "RemoveContainer" containerID="d928f0fa194c219b39b0112d664820ad9eee2e649abe451899a9de1e3a6d771a"
	Sep 16 19:18:19 addons-350900 kubelet[1468]: E0916 19:18:19.763191    1468 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-l5blt_gadget(db32689d-8dbb-4ef1-9bb0-f4933889f4e1)\"" pod="gadget/gadget-l5blt" podUID="db32689d-8dbb-4ef1-9bb0-f4933889f4e1"
	Sep 16 19:18:33 addons-350900 kubelet[1468]: I0916 19:18:33.762628    1468 scope.go:117] "RemoveContainer" containerID="d928f0fa194c219b39b0112d664820ad9eee2e649abe451899a9de1e3a6d771a"
	Sep 16 19:18:33 addons-350900 kubelet[1468]: E0916 19:18:33.762850    1468 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-l5blt_gadget(db32689d-8dbb-4ef1-9bb0-f4933889f4e1)\"" pod="gadget/gadget-l5blt" podUID="db32689d-8dbb-4ef1-9bb0-f4933889f4e1"
	Sep 16 19:18:40 addons-350900 kubelet[1468]: I0916 19:18:40.763911    1468 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-66c9cd494c-fvpw8" secret="" err="secret \"gcp-auth\" not found"
	Sep 16 19:18:40 addons-350900 kubelet[1468]: I0916 19:18:40.764698    1468 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-ttbjb" secret="" err="secret \"gcp-auth\" not found"
	Sep 16 19:18:45 addons-350900 kubelet[1468]: I0916 19:18:45.762883    1468 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-4vbhs" secret="" err="secret \"gcp-auth\" not found"
	Sep 16 19:18:48 addons-350900 kubelet[1468]: I0916 19:18:48.763578    1468 scope.go:117] "RemoveContainer" containerID="d928f0fa194c219b39b0112d664820ad9eee2e649abe451899a9de1e3a6d771a"
	Sep 16 19:18:48 addons-350900 kubelet[1468]: E0916 19:18:48.763772    1468 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-l5blt_gadget(db32689d-8dbb-4ef1-9bb0-f4933889f4e1)\"" pod="gadget/gadget-l5blt" podUID="db32689d-8dbb-4ef1-9bb0-f4933889f4e1"
	Sep 16 19:19:00 addons-350900 kubelet[1468]: I0916 19:19:00.763338    1468 scope.go:117] "RemoveContainer" containerID="d928f0fa194c219b39b0112d664820ad9eee2e649abe451899a9de1e3a6d771a"
	Sep 16 19:19:00 addons-350900 kubelet[1468]: E0916 19:19:00.763531    1468 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-l5blt_gadget(db32689d-8dbb-4ef1-9bb0-f4933889f4e1)\"" pod="gadget/gadget-l5blt" podUID="db32689d-8dbb-4ef1-9bb0-f4933889f4e1"
	Sep 16 19:19:14 addons-350900 kubelet[1468]: I0916 19:19:14.762672    1468 scope.go:117] "RemoveContainer" containerID="d928f0fa194c219b39b0112d664820ad9eee2e649abe451899a9de1e3a6d771a"
	Sep 16 19:19:16 addons-350900 kubelet[1468]: E0916 19:19:16.427253    1468 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: failed to start exec \"cb5ffeb4ce95cd39dd78ee6cbb5dcae2c47ffd685ba9eeae1ae369e6330730b3\": OCI runtime exec failed: exec failed: unable to start container process: error executing setns process: exit status 1: unknown" containerID="33febe36dd0e0df42c314ba8b89fcb4f0d39b5381077d829a78bb5f0c6a1452f" cmd=["/bin/gadgettracermanager","-liveness"]
	Sep 16 19:19:16 addons-350900 kubelet[1468]: E0916 19:19:16.447171    1468 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: failed to start exec \"f3d869a4817cca65d874efc78b0bb817dd5be207009cc83d12e903de05f4e805\": OCI runtime exec failed: exec failed: unable to start container process: error executing setns process: exit status 1: unknown" containerID="33febe36dd0e0df42c314ba8b89fcb4f0d39b5381077d829a78bb5f0c6a1452f" cmd=["/bin/gadgettracermanager","-liveness"]
	Sep 16 19:19:16 addons-350900 kubelet[1468]: E0916 19:19:16.466844    1468 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: failed to start exec \"482b0d5ba4d88401cabfefc50548e696a69abee96bc34adfe969ac6654ca9325\": OCI runtime exec failed: exec failed: unable to start container process: error executing setns process: exit status 1: unknown" containerID="33febe36dd0e0df42c314ba8b89fcb4f0d39b5381077d829a78bb5f0c6a1452f" cmd=["/bin/gadgettracermanager","-liveness"]
	Sep 16 19:19:17 addons-350900 kubelet[1468]: I0916 19:19:17.242057    1468 scope.go:117] "RemoveContainer" containerID="d928f0fa194c219b39b0112d664820ad9eee2e649abe451899a9de1e3a6d771a"
	Sep 16 19:19:17 addons-350900 kubelet[1468]: I0916 19:19:17.242480    1468 scope.go:117] "RemoveContainer" containerID="33febe36dd0e0df42c314ba8b89fcb4f0d39b5381077d829a78bb5f0c6a1452f"
	Sep 16 19:19:17 addons-350900 kubelet[1468]: E0916 19:19:17.242690    1468 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=gadget pod=gadget-l5blt_gadget(db32689d-8dbb-4ef1-9bb0-f4933889f4e1)\"" pod="gadget/gadget-l5blt" podUID="db32689d-8dbb-4ef1-9bb0-f4933889f4e1"
	Sep 16 19:19:18 addons-350900 kubelet[1468]: I0916 19:19:18.246269    1468 scope.go:117] "RemoveContainer" containerID="33febe36dd0e0df42c314ba8b89fcb4f0d39b5381077d829a78bb5f0c6a1452f"
	Sep 16 19:19:18 addons-350900 kubelet[1468]: E0916 19:19:18.246455    1468 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=gadget pod=gadget-l5blt_gadget(db32689d-8dbb-4ef1-9bb0-f4933889f4e1)\"" pod="gadget/gadget-l5blt" podUID="db32689d-8dbb-4ef1-9bb0-f4933889f4e1"
	Sep 16 19:19:30 addons-350900 kubelet[1468]: I0916 19:19:30.764704    1468 scope.go:117] "RemoveContainer" containerID="33febe36dd0e0df42c314ba8b89fcb4f0d39b5381077d829a78bb5f0c6a1452f"
	Sep 16 19:19:30 addons-350900 kubelet[1468]: E0916 19:19:30.765453    1468 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=gadget pod=gadget-l5blt_gadget(db32689d-8dbb-4ef1-9bb0-f4933889f4e1)\"" pod="gadget/gadget-l5blt" podUID="db32689d-8dbb-4ef1-9bb0-f4933889f4e1"
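Separate from the Volcano failure, the kubelet log shows the gadget container stuck in CrashLoopBackOff (back-off grew from 2m40s to 5m0s), with liveness execs failing because the container process is already gone. A minimal sketch for pulling the crash evidence while the pod exists (pod and namespace names from the log; --previous is standard kubectl):

  kubectl --context addons-350900 -n gadget logs gadget-l5blt --previous
  kubectl --context addons-350900 -n gadget describe pod gadget-l5blt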
	
	
	==> storage-provisioner [efb61ee827e6252daee89a43246751968671d68df0a95e3b74d8a96782e3dedb] <==
	I0916 19:12:51.370143       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0916 19:12:51.424402       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0916 19:12:51.444085       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0916 19:12:51.479479       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0916 19:12:51.482639       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-350900_0c54bc39-1b47-473b-98ad-35bbedc700fc!
	I0916 19:12:51.484429       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"18329ba5-25a4-4f4c-9d9e-6e05c332fc02", APIVersion:"v1", ResourceVersion:"544", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-350900_0c54bc39-1b47-473b-98ad-35bbedc700fc became leader
	I0916 19:12:51.583519       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-350900_0c54bc39-1b47-473b-98ad-35bbedc700fc!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-350900 -n addons-350900
helpers_test.go:261: (dbg) Run:  kubectl --context addons-350900 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-6pph2 ingress-nginx-admission-patch-6h4jw test-job-nginx-0
helpers_test.go:274: ======> post-mortem[TestAddons/serial/Volcano]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-350900 describe pod ingress-nginx-admission-create-6pph2 ingress-nginx-admission-patch-6h4jw test-job-nginx-0
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-350900 describe pod ingress-nginx-admission-create-6pph2 ingress-nginx-admission-patch-6h4jw test-job-nginx-0: exit status 1 (93.176807ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-6pph2" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-6h4jw" not found
	Error from server (NotFound): pods "test-job-nginx-0" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-350900 describe pod ingress-nginx-admission-create-6pph2 ingress-nginx-admission-patch-6h4jw test-job-nginx-0: exit status 1
--- FAIL: TestAddons/serial/Volcano (199.89s)
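To iterate on this failure outside CI, the subtest can be run on its own from a minikube checkout; a sketch assuming the repo's test/integration layout and a pre-built out/minikube-linux-arm64 binary (the go test flags below are standard; any extra harness flags are not confirmed by this report):

  go test ./test/integration -run 'TestAddons/serial/Volcano' -v -timeout 30m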

TestStartStop/group/old-k8s-version/serial/SecondStart (382.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-908284 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
E0916 20:03:35.058221  721428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/functional-720698/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p old-k8s-version-908284 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: exit status 102 (6m17.947250268s)

-- stdout --
	* [old-k8s-version-908284] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19649
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19649-716050/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19649-716050/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the docker driver based on existing profile
	* Starting "old-k8s-version-908284" primary control-plane node in "old-k8s-version-908284" cluster
	* Pulling base image v0.0.45-1726481311-19649 ...
	* Restarting existing docker container for "old-k8s-version-908284" ...
	* Preparing Kubernetes v1.20.0 on containerd 1.7.22 ...
	* Verifying Kubernetes components...
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image registry.k8s.io/echoserver:1.4
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-908284 addons enable metrics-server
	
	* Enabled addons: storage-provisioner, metrics-server, dashboard, default-storageclass
	
	

-- /stdout --
** stderr ** 
	I0916 20:03:24.718140  929978 out.go:345] Setting OutFile to fd 1 ...
	I0916 20:03:24.718330  929978 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 20:03:24.718342  929978 out.go:358] Setting ErrFile to fd 2...
	I0916 20:03:24.718348  929978 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 20:03:24.718578  929978 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19649-716050/.minikube/bin
	I0916 20:03:24.718962  929978 out.go:352] Setting JSON to false
	I0916 20:03:24.720300  929978 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":13518,"bootTime":1726503487,"procs":219,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0916 20:03:24.720377  929978 start.go:139] virtualization:  
	I0916 20:03:24.723577  929978 out.go:177] * [old-k8s-version-908284] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0916 20:03:24.726952  929978 out.go:177]   - MINIKUBE_LOCATION=19649
	I0916 20:03:24.727093  929978 notify.go:220] Checking for updates...
	I0916 20:03:24.732399  929978 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 20:03:24.735193  929978 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19649-716050/kubeconfig
	I0916 20:03:24.737819  929978 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19649-716050/.minikube
	I0916 20:03:24.740458  929978 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0916 20:03:24.743139  929978 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 20:03:24.746332  929978 config.go:182] Loaded profile config "old-k8s-version-908284": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0916 20:03:24.749431  929978 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0916 20:03:24.751956  929978 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 20:03:24.778666  929978 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0916 20:03:24.778809  929978 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 20:03:24.832436  929978 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-16 20:03:24.823357378 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0916 20:03:24.832551  929978 docker.go:318] overlay module found
	I0916 20:03:24.835387  929978 out.go:177] * Using the docker driver based on existing profile
	I0916 20:03:24.838150  929978 start.go:297] selected driver: docker
	I0916 20:03:24.838172  929978 start.go:901] validating driver "docker" against &{Name:old-k8s-version-908284 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-908284 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 20:03:24.838294  929978 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 20:03:24.838927  929978 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 20:03:24.902781  929978 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-16 20:03:24.893621197 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0916 20:03:24.903160  929978 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 20:03:24.903202  929978 cni.go:84] Creating CNI manager for ""
	I0916 20:03:24.903253  929978 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0916 20:03:24.903299  929978 start.go:340] cluster config:
	{Name:old-k8s-version-908284 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-908284 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 20:03:24.906488  929978 out.go:177] * Starting "old-k8s-version-908284" primary control-plane node in "old-k8s-version-908284" cluster
	I0916 20:03:24.909443  929978 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0916 20:03:24.912410  929978 out.go:177] * Pulling base image v0.0.45-1726481311-19649 ...
	I0916 20:03:24.915442  929978 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0916 20:03:24.915478  929978 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc in local docker daemon
	I0916 20:03:24.915496  929978 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19649-716050/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0916 20:03:24.915506  929978 cache.go:56] Caching tarball of preloaded images
	I0916 20:03:24.915599  929978 preload.go:172] Found /home/jenkins/minikube-integration/19649-716050/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0916 20:03:24.915609  929978 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on containerd
	I0916 20:03:24.915741  929978 profile.go:143] Saving config to /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/old-k8s-version-908284/config.json ...
	W0916 20:03:24.935423  929978 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc is of wrong architecture
	I0916 20:03:24.935442  929978 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc to local cache
	I0916 20:03:24.935516  929978 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc in local cache directory
	I0916 20:03:24.935533  929978 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc in local cache directory, skipping pull
	I0916 20:03:24.935538  929978 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc exists in cache, skipping pull
	I0916 20:03:24.935546  929978 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc as a tarball
	I0916 20:03:24.935551  929978 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc from local cache
	I0916 20:03:25.054874  929978 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc from cached tarball
	I0916 20:03:25.054914  929978 cache.go:194] Successfully downloaded all kic artifacts
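
The "wrong architecture" warning a few lines up is minikube rejecting a locally cached kicbase image whose architecture does not match the host, which is why the flow falls back to the cached tarball. A minimal shell sketch of that check under stated assumptions (image reference abbreviated without its digest; the real comparison lives in image.go):

    img="gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649"
    want="$(docker version --format '{{.Server.Arch}}')"                 # host arch, arm64 on this runner
    have="$(docker image inspect --format '{{.Architecture}}' "$img" 2>/dev/null)"
    [ "$have" = "$want" ] || echo "image $img is of wrong architecture"
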
	I0916 20:03:25.054946  929978 start.go:360] acquireMachinesLock for old-k8s-version-908284: {Name:mkeedf0ec4c438d39e8041ec6166425ab66e14bc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 20:03:25.055036  929978 start.go:364] duration metric: took 67.838µs to acquireMachinesLock for "old-k8s-version-908284"
	I0916 20:03:25.055069  929978 start.go:96] Skipping create...Using existing machine configuration
	I0916 20:03:25.055078  929978 fix.go:54] fixHost starting: 
	I0916 20:03:25.055417  929978 cli_runner.go:164] Run: docker container inspect old-k8s-version-908284 --format={{.State.Status}}
	I0916 20:03:25.072620  929978 fix.go:112] recreateIfNeeded on old-k8s-version-908284: state=Stopped err=<nil>
	W0916 20:03:25.072662  929978 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 20:03:25.075756  929978 out.go:177] * Restarting existing docker container for "old-k8s-version-908284" ...
	I0916 20:03:25.078430  929978 cli_runner.go:164] Run: docker start old-k8s-version-908284
	I0916 20:03:25.371274  929978 cli_runner.go:164] Run: docker container inspect old-k8s-version-908284 --format={{.State.Status}}
	I0916 20:03:25.394750  929978 kic.go:430] container "old-k8s-version-908284" state is running.
	I0916 20:03:25.395234  929978 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-908284
	I0916 20:03:25.421778  929978 profile.go:143] Saving config to /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/old-k8s-version-908284/config.json ...
	I0916 20:03:25.422024  929978 machine.go:93] provisionDockerMachine start ...
	I0916 20:03:25.422079  929978 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-908284
	I0916 20:03:25.447646  929978 main.go:141] libmachine: Using SSH client type: native
	I0916 20:03:25.447980  929978 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 33827 <nil> <nil>}
	I0916 20:03:25.447992  929978 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 20:03:25.448632  929978 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60676->127.0.0.1:33827: read: connection reset by peer
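
This dial error is expected immediately after "docker start": sshd inside the restarted container is not yet accepting connections, and libmachine simply retries until the hostname probe succeeds (about three seconds later, below). A rough shell equivalent of that wait, using the forwarded port and key from this run:

    until ssh -p 33827 -o ConnectTimeout=2 -o StrictHostKeyChecking=no \
          -i /home/jenkins/minikube-integration/19649-716050/.minikube/machines/old-k8s-version-908284/id_rsa \
          docker@127.0.0.1 hostname 2>/dev/null; do
      sleep 1
    done
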
	I0916 20:03:28.586685  929978 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-908284
	
	I0916 20:03:28.586710  929978 ubuntu.go:169] provisioning hostname "old-k8s-version-908284"
	I0916 20:03:28.586773  929978 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-908284
	I0916 20:03:28.603800  929978 main.go:141] libmachine: Using SSH client type: native
	I0916 20:03:28.604097  929978 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 33827 <nil> <nil>}
	I0916 20:03:28.604327  929978 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-908284 && echo "old-k8s-version-908284" | sudo tee /etc/hostname
	I0916 20:03:28.755481  929978 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-908284
	
	I0916 20:03:28.755565  929978 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-908284
	I0916 20:03:28.773579  929978 main.go:141] libmachine: Using SSH client type: native
	I0916 20:03:28.773829  929978 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 33827 <nil> <nil>}
	I0916 20:03:28.773855  929978 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-908284' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-908284/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-908284' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 20:03:28.911304  929978 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 20:03:28.911359  929978 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19649-716050/.minikube CaCertPath:/home/jenkins/minikube-integration/19649-716050/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19649-716050/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19649-716050/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19649-716050/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19649-716050/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19649-716050/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19649-716050/.minikube}
	I0916 20:03:28.911392  929978 ubuntu.go:177] setting up certificates
	I0916 20:03:28.911402  929978 provision.go:84] configureAuth start
	I0916 20:03:28.911467  929978 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-908284
	I0916 20:03:28.928663  929978 provision.go:143] copyHostCerts
	I0916 20:03:28.928739  929978 exec_runner.go:144] found /home/jenkins/minikube-integration/19649-716050/.minikube/ca.pem, removing ...
	I0916 20:03:28.928752  929978 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19649-716050/.minikube/ca.pem
	I0916 20:03:28.928833  929978 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19649-716050/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19649-716050/.minikube/ca.pem (1082 bytes)
	I0916 20:03:28.928933  929978 exec_runner.go:144] found /home/jenkins/minikube-integration/19649-716050/.minikube/cert.pem, removing ...
	I0916 20:03:28.928943  929978 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19649-716050/.minikube/cert.pem
	I0916 20:03:28.928972  929978 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19649-716050/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19649-716050/.minikube/cert.pem (1123 bytes)
	I0916 20:03:28.929061  929978 exec_runner.go:144] found /home/jenkins/minikube-integration/19649-716050/.minikube/key.pem, removing ...
	I0916 20:03:28.929066  929978 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19649-716050/.minikube/key.pem
	I0916 20:03:28.929090  929978 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19649-716050/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19649-716050/.minikube/key.pem (1675 bytes)
	I0916 20:03:28.929143  929978 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19649-716050/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19649-716050/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19649-716050/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-908284 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-908284]
	I0916 20:03:29.753645  929978 provision.go:177] copyRemoteCerts
	I0916 20:03:29.753718  929978 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 20:03:29.753762  929978 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-908284
	I0916 20:03:29.770796  929978 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33827 SSHKeyPath:/home/jenkins/minikube-integration/19649-716050/.minikube/machines/old-k8s-version-908284/id_rsa Username:docker}
	I0916 20:03:29.868064  929978 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-716050/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0916 20:03:29.892633  929978 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-716050/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0916 20:03:29.915762  929978 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-716050/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 20:03:29.939580  929978 provision.go:87] duration metric: took 1.028153148s to configureAuth
	I0916 20:03:29.939608  929978 ubuntu.go:193] setting minikube options for container-runtime
	I0916 20:03:29.939799  929978 config.go:182] Loaded profile config "old-k8s-version-908284": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0916 20:03:29.939813  929978 machine.go:96] duration metric: took 4.517781497s to provisionDockerMachine
	I0916 20:03:29.939821  929978 start.go:293] postStartSetup for "old-k8s-version-908284" (driver="docker")
	I0916 20:03:29.939831  929978 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 20:03:29.939882  929978 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 20:03:29.939925  929978 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-908284
	I0916 20:03:29.956607  929978 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33827 SSHKeyPath:/home/jenkins/minikube-integration/19649-716050/.minikube/machines/old-k8s-version-908284/id_rsa Username:docker}
	I0916 20:03:30.062571  929978 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 20:03:30.066339  929978 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 20:03:30.066379  929978 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 20:03:30.066390  929978 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 20:03:30.066398  929978 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0916 20:03:30.066409  929978 filesync.go:126] Scanning /home/jenkins/minikube-integration/19649-716050/.minikube/addons for local assets ...
	I0916 20:03:30.066475  929978 filesync.go:126] Scanning /home/jenkins/minikube-integration/19649-716050/.minikube/files for local assets ...
	I0916 20:03:30.066565  929978 filesync.go:149] local asset: /home/jenkins/minikube-integration/19649-716050/.minikube/files/etc/ssl/certs/7214282.pem -> 7214282.pem in /etc/ssl/certs
	I0916 20:03:30.066682  929978 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 20:03:30.076978  929978 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-716050/.minikube/files/etc/ssl/certs/7214282.pem --> /etc/ssl/certs/7214282.pem (1708 bytes)
	I0916 20:03:30.106604  929978 start.go:296] duration metric: took 166.765165ms for postStartSetup
	I0916 20:03:30.106701  929978 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 20:03:30.106775  929978 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-908284
	I0916 20:03:30.126385  929978 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33827 SSHKeyPath:/home/jenkins/minikube-integration/19649-716050/.minikube/machines/old-k8s-version-908284/id_rsa Username:docker}
	I0916 20:03:30.224183  929978 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 20:03:30.228640  929978 fix.go:56] duration metric: took 5.173553658s for fixHost
	I0916 20:03:30.228666  929978 start.go:83] releasing machines lock for "old-k8s-version-908284", held for 5.173620224s
	I0916 20:03:30.228736  929978 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-908284
	I0916 20:03:30.246082  929978 ssh_runner.go:195] Run: cat /version.json
	I0916 20:03:30.246139  929978 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-908284
	I0916 20:03:30.246399  929978 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 20:03:30.246470  929978 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-908284
	I0916 20:03:30.268956  929978 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33827 SSHKeyPath:/home/jenkins/minikube-integration/19649-716050/.minikube/machines/old-k8s-version-908284/id_rsa Username:docker}
	I0916 20:03:30.271628  929978 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33827 SSHKeyPath:/home/jenkins/minikube-integration/19649-716050/.minikube/machines/old-k8s-version-908284/id_rsa Username:docker}
	I0916 20:03:30.367480  929978 ssh_runner.go:195] Run: systemctl --version
	I0916 20:03:30.539963  929978 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 20:03:30.544594  929978 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0916 20:03:30.563516  929978 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0916 20:03:30.563607  929978 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 20:03:30.573032  929978 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
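
The loopback patch above brings legacy CNI loopback configs up to spec for CNI 1.0.0 plugins: it injects a "name" field where one is missing and pins cniVersion. After patching, such a config would look roughly like this (filename and field order illustrative; only the injected fields come from the command above):

    cat /etc/cni/net.d/loopback.conf
    # {
    #     "cniVersion": "1.0.0",
    #     "name": "loopback",
    #     "type": "loopback"
    # }
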
	I0916 20:03:30.573055  929978 start.go:495] detecting cgroup driver to use...
	I0916 20:03:30.573094  929978 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0916 20:03:30.573145  929978 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0916 20:03:30.586994  929978 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0916 20:03:30.599379  929978 docker.go:217] disabling cri-docker service (if available) ...
	I0916 20:03:30.599461  929978 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 20:03:30.613195  929978 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 20:03:30.625596  929978 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 20:03:30.714764  929978 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 20:03:30.804897  929978 docker.go:233] disabling docker service ...
	I0916 20:03:30.804974  929978 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 20:03:30.817639  929978 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 20:03:30.828744  929978 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 20:03:30.918436  929978 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 20:03:30.999207  929978 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 20:03:31.013125  929978 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 20:03:31.030896  929978 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I0916 20:03:31.040769  929978 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0916 20:03:31.050925  929978 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0916 20:03:31.051041  929978 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0916 20:03:31.061169  929978 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 20:03:31.072106  929978 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0916 20:03:31.082403  929978 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 20:03:31.093097  929978 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 20:03:31.104013  929978 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0916 20:03:31.114744  929978 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 20:03:31.123890  929978 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 20:03:31.132713  929978 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 20:03:31.220640  929978 ssh_runner.go:195] Run: sudo systemctl restart containerd
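
Taken together, the sed edits above rewrite /etc/containerd/config.toml for a cgroupfs host: pin the sandbox (pause) image that the v1.20.0 control plane expects, disable the systemd cgroup driver, and migrate legacy runtime names to runc v2. Condensed to the essential commands from this run, plus the restart that applies them:

    sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml
    sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml
    sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml
    sudo systemctl daemon-reload && sudo systemctl restart containerd
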
	I0916 20:03:31.392618  929978 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0916 20:03:31.392708  929978 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0916 20:03:31.400522  929978 start.go:563] Will wait 60s for crictl version
	I0916 20:03:31.400604  929978 ssh_runner.go:195] Run: which crictl
	I0916 20:03:31.405224  929978 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 20:03:31.454078  929978 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
	I0916 20:03:31.454168  929978 ssh_runner.go:195] Run: containerd --version
	I0916 20:03:31.481061  929978 ssh_runner.go:195] Run: containerd --version
	I0916 20:03:31.507358  929978 out.go:177] * Preparing Kubernetes v1.20.0 on containerd 1.7.22 ...
	I0916 20:03:31.509900  929978 cli_runner.go:164] Run: docker network inspect old-k8s-version-908284 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
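
The Go template above renders the Docker network as JSON; the Gateway it extracts is what gets mapped to host.minikube.internal just below (192.168.85.1 on this network). The same fields can be pulled directly; the output shown is inferred from this run's addresses, not taken from the log:

    docker network inspect old-k8s-version-908284 \
      --format '{{range .IPAM.Config}}{{.Subnet}} via {{.Gateway}}{{end}}'
    # expected here: 192.168.85.0/24 via 192.168.85.1
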
	I0916 20:03:31.525303  929978 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0916 20:03:31.528680  929978 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
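
The hosts-file update above uses a filter, append, then copy-as-root idiom, which keeps the edit idempotent and works around the fact that a plain shell redirection would not run under sudo. Annotated:

    {
      grep -v $'\thost.minikube.internal$' /etc/hosts   # drop any stale entry
      echo $'192.168.85.1\thost.minikube.internal'      # append the fresh mapping
    } > "/tmp/h.$$"                                     # $$ = shell PID, a unique temp name
    sudo cp "/tmp/h.$$" /etc/hosts                      # copy into place as root
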
	I0916 20:03:31.539203  929978 kubeadm.go:883] updating cluster {Name:old-k8s-version-908284 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-908284 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 20:03:31.539377  929978 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0916 20:03:31.539438  929978 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 20:03:31.576198  929978 containerd.go:627] all images are preloaded for containerd runtime.
	I0916 20:03:31.576223  929978 containerd.go:534] Images already preloaded, skipping extraction
	I0916 20:03:31.576285  929978 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 20:03:31.612501  929978 containerd.go:627] all images are preloaded for containerd runtime.
	I0916 20:03:31.612520  929978 cache_images.go:84] Images are preloaded, skipping loading
	I0916 20:03:31.612528  929978 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.20.0 containerd true true} ...
	I0916 20:03:31.612690  929978 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=old-k8s-version-908284 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-908284 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
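
The empty ExecStart= in the unit above is deliberate systemd drop-in syntax: it clears the ExecStart list inherited from the base kubelet.service so the next line replaces the command instead of appending a second one. The drop-in is written to the path scp'd a few lines below; a trimmed sketch (most kubelet flags omitted for brevity):

    sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<'EOF'
    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --config=/var/lib/kubelet/config.yaml --node-ip=192.168.85.2
    EOF
    sudo systemctl daemon-reload
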
	I0916 20:03:31.612753  929978 ssh_runner.go:195] Run: sudo crictl info
	I0916 20:03:31.648558  929978 cni.go:84] Creating CNI manager for ""
	I0916 20:03:31.648583  929978 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0916 20:03:31.648593  929978 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 20:03:31.648648  929978 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-908284 NodeName:old-k8s-version-908284 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0916 20:03:31.648805  929978 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "old-k8s-version-908284"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0916 20:03:31.648884  929978 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0916 20:03:31.657768  929978 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 20:03:31.657860  929978 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 20:03:31.666816  929978 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (442 bytes)
	I0916 20:03:31.684918  929978 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 20:03:31.702827  929978 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2125 bytes)
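
Note the config is staged as kubeadm.yaml.new rather than written over the live file: the restart path below diffs it against /var/tmp/minikube/kubeadm.yaml to decide whether the running control plane can be reused. The decision reduces to:

    if sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new >/dev/null; then
      echo "running cluster does not require reconfiguration"
    else
      echo "config drifted: control plane must be reconfigured"
    fi
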
	I0916 20:03:31.720970  929978 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0916 20:03:31.724277  929978 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 20:03:31.735872  929978 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 20:03:31.822139  929978 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 20:03:31.842764  929978 certs.go:68] Setting up /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/old-k8s-version-908284 for IP: 192.168.85.2
	I0916 20:03:31.842825  929978 certs.go:194] generating shared ca certs ...
	I0916 20:03:31.842855  929978 certs.go:226] acquiring lock for ca certs: {Name:mk293c0d980623a78c1c8e4e7829d120cb991002 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 20:03:31.843029  929978 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19649-716050/.minikube/ca.key
	I0916 20:03:31.843110  929978 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19649-716050/.minikube/proxy-client-ca.key
	I0916 20:03:31.843143  929978 certs.go:256] generating profile certs ...
	I0916 20:03:31.843251  929978 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/old-k8s-version-908284/client.key
	I0916 20:03:31.843357  929978 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/old-k8s-version-908284/apiserver.key.71dd10eb
	I0916 20:03:31.843426  929978 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/old-k8s-version-908284/proxy-client.key
	I0916 20:03:31.843551  929978 certs.go:484] found cert: /home/jenkins/minikube-integration/19649-716050/.minikube/certs/721428.pem (1338 bytes)
	W0916 20:03:31.843610  929978 certs.go:480] ignoring /home/jenkins/minikube-integration/19649-716050/.minikube/certs/721428_empty.pem, impossibly tiny 0 bytes
	I0916 20:03:31.843637  929978 certs.go:484] found cert: /home/jenkins/minikube-integration/19649-716050/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 20:03:31.843695  929978 certs.go:484] found cert: /home/jenkins/minikube-integration/19649-716050/.minikube/certs/ca.pem (1082 bytes)
	I0916 20:03:31.843738  929978 certs.go:484] found cert: /home/jenkins/minikube-integration/19649-716050/.minikube/certs/cert.pem (1123 bytes)
	I0916 20:03:31.843792  929978 certs.go:484] found cert: /home/jenkins/minikube-integration/19649-716050/.minikube/certs/key.pem (1675 bytes)
	I0916 20:03:31.843860  929978 certs.go:484] found cert: /home/jenkins/minikube-integration/19649-716050/.minikube/files/etc/ssl/certs/7214282.pem (1708 bytes)
	I0916 20:03:31.844500  929978 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-716050/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 20:03:31.873314  929978 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-716050/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0916 20:03:31.902612  929978 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-716050/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 20:03:31.932296  929978 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-716050/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 20:03:31.962169  929978 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/old-k8s-version-908284/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0916 20:03:31.997323  929978 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/old-k8s-version-908284/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0916 20:03:32.028480  929978 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/old-k8s-version-908284/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 20:03:32.055491  929978 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/old-k8s-version-908284/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0916 20:03:32.082053  929978 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-716050/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 20:03:32.107396  929978 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-716050/.minikube/certs/721428.pem --> /usr/share/ca-certificates/721428.pem (1338 bytes)
	I0916 20:03:32.132675  929978 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-716050/.minikube/files/etc/ssl/certs/7214282.pem --> /usr/share/ca-certificates/7214282.pem (1708 bytes)
	I0916 20:03:32.156699  929978 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 20:03:32.174456  929978 ssh_runner.go:195] Run: openssl version
	I0916 20:03:32.180387  929978 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/721428.pem && ln -fs /usr/share/ca-certificates/721428.pem /etc/ssl/certs/721428.pem"
	I0916 20:03:32.190375  929978 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/721428.pem
	I0916 20:03:32.194048  929978 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 19:23 /usr/share/ca-certificates/721428.pem
	I0916 20:03:32.194116  929978 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/721428.pem
	I0916 20:03:32.200921  929978 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/721428.pem /etc/ssl/certs/51391683.0"
	I0916 20:03:32.209895  929978 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7214282.pem && ln -fs /usr/share/ca-certificates/7214282.pem /etc/ssl/certs/7214282.pem"
	I0916 20:03:32.219764  929978 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7214282.pem
	I0916 20:03:32.223471  929978 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 19:23 /usr/share/ca-certificates/7214282.pem
	I0916 20:03:32.223547  929978 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7214282.pem
	I0916 20:03:32.231000  929978 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7214282.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 20:03:32.240256  929978 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 20:03:32.249347  929978 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 20:03:32.252936  929978 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 19:12 /usr/share/ca-certificates/minikubeCA.pem
	I0916 20:03:32.252997  929978 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 20:03:32.260162  929978 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
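
The b5213941.0 link follows OpenSSL's hashed-directory convention: libraries resolve a CA by the hash of its subject name, so each PEM in /usr/share/ca-certificates gets a <hash>.0 symlink in /etc/ssl/certs. Deriving one such link:

    pem=/usr/share/ca-certificates/minikubeCA.pem
    hash="$(openssl x509 -hash -noout -in "$pem")"   # prints b5213941 for this CA
    sudo ln -fs "$pem" "/etc/ssl/certs/$hash.0"
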
	I0916 20:03:32.269622  929978 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 20:03:32.273209  929978 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0916 20:03:32.279880  929978 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0916 20:03:32.286663  929978 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0916 20:03:32.293676  929978 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0916 20:03:32.300508  929978 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0916 20:03:32.307732  929978 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
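
Each -checkend 86400 probe above exits non-zero if the certificate's notAfter falls within the next 24 hours, which would force regeneration instead of reuse. The same validation as one loop over the paths checked in this run:

    for crt in apiserver-etcd-client apiserver-kubelet-client front-proxy-client \
               etcd/server etcd/healthcheck-client etcd/peer; do
      openssl x509 -noout -in "/var/lib/minikube/certs/$crt.crt" -checkend 86400 \
        || echo "$crt expires within 24h and must be regenerated"
    done
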
	I0916 20:03:32.314452  929978 kubeadm.go:392] StartCluster: {Name:old-k8s-version-908284 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-908284 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 20:03:32.314565  929978 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0916 20:03:32.314635  929978 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 20:03:32.354214  929978 cri.go:89] found id: "29165aa257b4ed82c3bed159074da5f7d4b4358a908e5b0b2f105c538186699c"
	I0916 20:03:32.354243  929978 cri.go:89] found id: "73b796fc3c23c7bdf0b3e94af03866db4c7513e7f70c3a040a2734d0323c37be"
	I0916 20:03:32.354250  929978 cri.go:89] found id: "05471d3e1c31ae8e82b32033906c8b2d9d329a3ea9850acf38ac17d8175331ed"
	I0916 20:03:32.354254  929978 cri.go:89] found id: "f1a07ea1e6c1902d18aad82fc148e1704278cff58e2f38adcb46954946abe5af"
	I0916 20:03:32.354257  929978 cri.go:89] found id: "b1144ab00f4c3249a9dbe4fed6b1368a14fe5aba7783b45f4fa53cc5e203ce97"
	I0916 20:03:32.354274  929978 cri.go:89] found id: "34ed62a120d6504514cf022878498c06dd4558aff9f75e87eff0e60b822c82b0"
	I0916 20:03:32.354279  929978 cri.go:89] found id: "29503eaa5c2ae2b3f8ad37d2fa456369ed87669ef1293a388883a927f2d6f5bd"
	I0916 20:03:32.354282  929978 cri.go:89] found id: "e72d9de27e5e47273b9413248459297d25f9204a3d3bdbe871f68a09eed8cc31"
	I0916 20:03:32.354285  929978 cri.go:89] found id: ""
	I0916 20:03:32.354347  929978 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0916 20:03:32.367146  929978 cri.go:116] JSON = null
	W0916 20:03:32.367252  929978 kubeadm.go:399] unpause failed: list paused: list returned 0 containers, but ps returned 8
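
The "unpause failed" warning is bookkeeping, not a fault: crictl sees 8 kube-system containers, but runc's listing of the containerd runc root comes back null, so there is nothing to unpause and startup proceeds. The two views being compared:

    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system   # 8 IDs in this run
    sudo runc --root /run/containerd/runc/k8s.io list -f json                   # null in this run
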
	I0916 20:03:32.367468  929978 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 20:03:32.376657  929978 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0916 20:03:32.376677  929978 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0916 20:03:32.376768  929978 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0916 20:03:32.385638  929978 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0916 20:03:32.386272  929978 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-908284" does not appear in /home/jenkins/minikube-integration/19649-716050/kubeconfig
	I0916 20:03:32.386539  929978 kubeconfig.go:62] /home/jenkins/minikube-integration/19649-716050/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-908284" cluster setting kubeconfig missing "old-k8s-version-908284" context setting]
	I0916 20:03:32.386972  929978 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-716050/kubeconfig: {Name:mk8f5a792b67bd8f95cfe5b13b3ce4d720aa03a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 20:03:32.388389  929978 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0916 20:03:32.397390  929978 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.85.2
	I0916 20:03:32.397480  929978 kubeadm.go:597] duration metric: took 20.79582ms to restartPrimaryControlPlane
	I0916 20:03:32.397499  929978 kubeadm.go:394] duration metric: took 83.05649ms to StartCluster
	I0916 20:03:32.397518  929978 settings.go:142] acquiring lock: {Name:mk07aae78f50a6c58469ab3950475223131150bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 20:03:32.397653  929978 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19649-716050/kubeconfig
	I0916 20:03:32.398667  929978 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-716050/kubeconfig: {Name:mk8f5a792b67bd8f95cfe5b13b3ce4d720aa03a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 20:03:32.398916  929978 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0916 20:03:32.399249  929978 config.go:182] Loaded profile config "old-k8s-version-908284": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0916 20:03:32.399437  929978 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0916 20:03:32.399522  929978 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-908284"
	I0916 20:03:32.399544  929978 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-908284"
	W0916 20:03:32.399551  929978 addons.go:243] addon storage-provisioner should already be in state true
	I0916 20:03:32.399576  929978 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-908284"
	I0916 20:03:32.399588  929978 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-908284"
	I0916 20:03:32.399928  929978 cli_runner.go:164] Run: docker container inspect old-k8s-version-908284 --format={{.State.Status}}
	I0916 20:03:32.400123  929978 host.go:66] Checking if "old-k8s-version-908284" exists ...
	I0916 20:03:32.400354  929978 addons.go:69] Setting dashboard=true in profile "old-k8s-version-908284"
	I0916 20:03:32.400380  929978 addons.go:234] Setting addon dashboard=true in "old-k8s-version-908284"
	W0916 20:03:32.400388  929978 addons.go:243] addon dashboard should already be in state true
	I0916 20:03:32.400415  929978 host.go:66] Checking if "old-k8s-version-908284" exists ...
	I0916 20:03:32.400831  929978 cli_runner.go:164] Run: docker container inspect old-k8s-version-908284 --format={{.State.Status}}
	I0916 20:03:32.400951  929978 cli_runner.go:164] Run: docker container inspect old-k8s-version-908284 --format={{.State.Status}}
	I0916 20:03:32.406868  929978 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-908284"
	I0916 20:03:32.406961  929978 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-908284"
	W0916 20:03:32.406992  929978 addons.go:243] addon metrics-server should already be in state true
	I0916 20:03:32.406886  929978 out.go:177] * Verifying Kubernetes components...
	I0916 20:03:32.407080  929978 host.go:66] Checking if "old-k8s-version-908284" exists ...
	I0916 20:03:32.407642  929978 cli_runner.go:164] Run: docker container inspect old-k8s-version-908284 --format={{.State.Status}}
	I0916 20:03:32.409686  929978 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 20:03:32.431525  929978 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0916 20:03:32.434122  929978 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0916 20:03:32.436993  929978 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0916 20:03:32.437023  929978 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0916 20:03:32.437165  929978 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-908284
	I0916 20:03:32.447307  929978 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-908284"
	W0916 20:03:32.447358  929978 addons.go:243] addon default-storageclass should already be in state true
	I0916 20:03:32.447384  929978 host.go:66] Checking if "old-k8s-version-908284" exists ...
	I0916 20:03:32.447880  929978 cli_runner.go:164] Run: docker container inspect old-k8s-version-908284 --format={{.State.Status}}
	I0916 20:03:32.468892  929978 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 20:03:32.471664  929978 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 20:03:32.471686  929978 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 20:03:32.471758  929978 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-908284
	I0916 20:03:32.491898  929978 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0916 20:03:32.495548  929978 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0916 20:03:32.495588  929978 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0916 20:03:32.495654  929978 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-908284
	I0916 20:03:32.516705  929978 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33827 SSHKeyPath:/home/jenkins/minikube-integration/19649-716050/.minikube/machines/old-k8s-version-908284/id_rsa Username:docker}
	I0916 20:03:32.553211  929978 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 20:03:32.553237  929978 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 20:03:32.553298  929978 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-908284
	I0916 20:03:32.553656  929978 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33827 SSHKeyPath:/home/jenkins/minikube-integration/19649-716050/.minikube/machines/old-k8s-version-908284/id_rsa Username:docker}
	I0916 20:03:32.563648  929978 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33827 SSHKeyPath:/home/jenkins/minikube-integration/19649-716050/.minikube/machines/old-k8s-version-908284/id_rsa Username:docker}
	I0916 20:03:32.587778  929978 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 20:03:32.589948  929978 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33827 SSHKeyPath:/home/jenkins/minikube-integration/19649-716050/.minikube/machines/old-k8s-version-908284/id_rsa Username:docker}
	I0916 20:03:32.623751  929978 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-908284" to be "Ready" ...
	I0916 20:03:32.663625  929978 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0916 20:03:32.663652  929978 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0916 20:03:32.682799  929978 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0916 20:03:32.682823  929978 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0916 20:03:32.697476  929978 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0916 20:03:32.697501  929978 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0916 20:03:32.704994  929978 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0916 20:03:32.705016  929978 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0916 20:03:32.733012  929978 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 20:03:32.735234  929978 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0916 20:03:32.735255  929978 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0916 20:03:32.740986  929978 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 20:03:32.752249  929978 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0916 20:03:32.752276  929978 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0916 20:03:32.777877  929978 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0916 20:03:32.777917  929978 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0916 20:03:32.793768  929978 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0916 20:03:32.793794  929978 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0916 20:03:32.820742  929978 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0916 20:03:32.878033  929978 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0916 20:03:32.878060  929978 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	W0916 20:03:32.927457  929978 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 20:03:32.927492  929978 retry.go:31] will retry after 144.592731ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 20:03:32.947068  929978 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0916 20:03:32.947092  929978 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	W0916 20:03:32.951877  929978 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 20:03:32.951913  929978 retry.go:31] will retry after 278.759754ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0916 20:03:32.953423  929978 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 20:03:32.953450  929978 retry.go:31] will retry after 259.007592ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 20:03:32.966766  929978 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0916 20:03:32.966796  929978 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0916 20:03:32.988618  929978 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0916 20:03:32.988645  929978 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0916 20:03:33.010053  929978 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0916 20:03:33.072460  929978 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0916 20:03:33.087995  929978 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 20:03:33.088069  929978 retry.go:31] will retry after 150.33609ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0916 20:03:33.146008  929978 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 20:03:33.146043  929978 retry.go:31] will retry after 454.060314ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 20:03:33.213226  929978 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0916 20:03:33.231626  929978 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0916 20:03:33.239002  929978 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0916 20:03:33.350821  929978 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 20:03:33.350857  929978 retry.go:31] will retry after 346.912025ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0916 20:03:33.350900  929978 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 20:03:33.350912  929978 retry.go:31] will retry after 491.348641ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0916 20:03:33.389072  929978 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 20:03:33.389106  929978 retry.go:31] will retry after 333.235615ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 20:03:33.600374  929978 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0916 20:03:33.683058  929978 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 20:03:33.683093  929978 retry.go:31] will retry after 454.345038ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 20:03:33.698249  929978 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0916 20:03:33.722743  929978 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0916 20:03:33.787909  929978 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 20:03:33.787950  929978 retry.go:31] will retry after 693.489784ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0916 20:03:33.821994  929978 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 20:03:33.822030  929978 retry.go:31] will retry after 674.375394ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 20:03:33.843215  929978 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0916 20:03:33.914264  929978 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 20:03:33.914296  929978 retry.go:31] will retry after 570.698049ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 20:03:34.138200  929978 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0916 20:03:34.211949  929978 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 20:03:34.212019  929978 retry.go:31] will retry after 987.307392ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 20:03:34.482420  929978 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0916 20:03:34.485738  929978 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0916 20:03:34.497210  929978 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0916 20:03:34.607045  929978 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 20:03:34.607132  929978 retry.go:31] will retry after 951.265996ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 20:03:34.624565  929978 node_ready.go:53] error getting node "old-k8s-version-908284": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-908284": dial tcp 192.168.85.2:8443: connect: connection refused
	W0916 20:03:34.627136  929978 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 20:03:34.627196  929978 retry.go:31] will retry after 430.508845ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0916 20:03:34.638974  929978 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 20:03:34.639007  929978 retry.go:31] will retry after 637.111113ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 20:03:35.058778  929978 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0916 20:03:35.130060  929978 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 20:03:35.130094  929978 retry.go:31] will retry after 1.440908849s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 20:03:35.200340  929978 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0916 20:03:35.271420  929978 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 20:03:35.271463  929978 retry.go:31] will retry after 793.60548ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 20:03:35.276800  929978 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0916 20:03:35.347081  929978 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 20:03:35.347124  929978 retry.go:31] will retry after 819.503121ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 20:03:35.559492  929978 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0916 20:03:35.640930  929978 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 20:03:35.640966  929978 retry.go:31] will retry after 770.009951ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 20:03:36.065299  929978 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0916 20:03:36.141904  929978 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 20:03:36.141938  929978 retry.go:31] will retry after 1.514530014s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 20:03:36.167157  929978 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0916 20:03:36.240524  929978 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 20:03:36.240557  929978 retry.go:31] will retry after 1.753608821s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 20:03:36.411865  929978 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0916 20:03:36.501726  929978 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 20:03:36.501802  929978 retry.go:31] will retry after 963.422189ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 20:03:36.572063  929978 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0916 20:03:36.624730  929978 node_ready.go:53] error getting node "old-k8s-version-908284": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-908284": dial tcp 192.168.85.2:8443: connect: connection refused
	W0916 20:03:36.645497  929978 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 20:03:36.645527  929978 retry.go:31] will retry after 1.226758535s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 20:03:37.466016  929978 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0916 20:03:37.545848  929978 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 20:03:37.545887  929978 retry.go:31] will retry after 2.53403276s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 20:03:37.656849  929978 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0916 20:03:37.730464  929978 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 20:03:37.730495  929978 retry.go:31] will retry after 4.179300578s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 20:03:37.872755  929978 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0916 20:03:37.946930  929978 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 20:03:37.946964  929978 retry.go:31] will retry after 1.735319067s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 20:03:37.995135  929978 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0916 20:03:38.079705  929978 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 20:03:38.079736  929978 retry.go:31] will retry after 1.558317377s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 20:03:39.125364  929978 node_ready.go:53] error getting node "old-k8s-version-908284": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-908284": dial tcp 192.168.85.2:8443: connect: connection refused
	I0916 20:03:39.638614  929978 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0916 20:03:39.683011  929978 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0916 20:03:39.713062  929978 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 20:03:39.713092  929978 retry.go:31] will retry after 2.189130493s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0916 20:03:39.763981  929978 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 20:03:39.764016  929978 retry.go:31] will retry after 2.247390142s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 20:03:40.080185  929978 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0916 20:03:40.156355  929978 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 20:03:40.156385  929978 retry.go:31] will retry after 5.006531683s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 20:03:41.903402  929978 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0916 20:03:41.910752  929978 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 20:03:42.012316  929978 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0916 20:03:45.163714  929978 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0916 20:03:49.019978  929978 node_ready.go:49] node "old-k8s-version-908284" has status "Ready":"True"
	I0916 20:03:49.020004  929978 node_ready.go:38] duration metric: took 16.396173513s for node "old-k8s-version-908284" to be "Ready" ...
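
Note the two endpoints in play above: kubectl runs on the node and talks to localhost:8443, while node_ready.go appears to poll the apiserver from the test host at https://192.168.85.2:8443, so both report connection refused until the restarted apiserver binds. The readiness test itself is the standard node Ready condition; a minimal sketch with client-go, assuming a recent client-go and a pre-built *kubernetes.Clientset (the usual condition check, not minikube's exact code):

package readiness

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// nodeReady reports whether the named node has the Ready condition set to
// True, which is what node_ready.go is waiting 16.4s for above.
func nodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		// e.g. "connect: connection refused" while the apiserver restarts
		return false, err
	}
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}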
	I0916 20:03:49.020016  929978 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 20:03:49.109604  929978 pod_ready.go:79] waiting up to 6m0s for pod "coredns-74ff55c5b-h4fss" in "kube-system" namespace to be "Ready" ...
	I0916 20:03:49.240569  929978 pod_ready.go:93] pod "coredns-74ff55c5b-h4fss" in "kube-system" namespace has status "Ready":"True"
	I0916 20:03:49.240631  929978 pod_ready.go:82] duration metric: took 130.946367ms for pod "coredns-74ff55c5b-h4fss" in "kube-system" namespace to be "Ready" ...
	I0916 20:03:49.240676  929978 pod_ready.go:79] waiting up to 6m0s for pod "etcd-old-k8s-version-908284" in "kube-system" namespace to be "Ready" ...
	I0916 20:03:49.269080  929978 pod_ready.go:93] pod "etcd-old-k8s-version-908284" in "kube-system" namespace has status "Ready":"True"
	I0916 20:03:49.269161  929978 pod_ready.go:82] duration metric: took 28.463905ms for pod "etcd-old-k8s-version-908284" in "kube-system" namespace to be "Ready" ...
	I0916 20:03:49.269192  929978 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-908284" in "kube-system" namespace to be "Ready" ...
	I0916 20:03:49.287017  929978 pod_ready.go:93] pod "kube-apiserver-old-k8s-version-908284" in "kube-system" namespace has status "Ready":"True"
	I0916 20:03:49.287090  929978 pod_ready.go:82] duration metric: took 17.876526ms for pod "kube-apiserver-old-k8s-version-908284" in "kube-system" namespace to be "Ready" ...
	I0916 20:03:49.287115  929978 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-908284" in "kube-system" namespace to be "Ready" ...
	I0916 20:03:50.142425  929978 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (8.238979503s)
	I0916 20:03:50.142728  929978 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.231944167s)
	I0916 20:03:50.142763  929978 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (8.130412345s)
	I0916 20:03:50.142856  929978 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.979089497s)
	I0916 20:03:50.142926  929978 addons.go:475] Verifying addon metrics-server=true in "old-k8s-version-908284"
	I0916 20:03:50.143881  929978 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-908284 addons enable metrics-server
	
	I0916 20:03:50.152951  929978 out.go:177] * Enabled addons: storage-provisioner, metrics-server, dashboard, default-storageclass
	I0916 20:03:50.154436  929978 addons.go:510] duration metric: took 17.755114122s for enable addons: enabled=[storage-provisioner metrics-server dashboard default-storageclass]
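
All four addon batches land within the same second once the apiserver is reachable; the "Completed: ... (8.238979503s)" durations above include the time each applier spent blocked in its retry loop. A sketch of that fan-out, one goroutine per addon running a single kubectl apply over all of the addon's manifests, assuming a simple WaitGroup structure (minikube's actual orchestration may differ):

package addons

import (
	"fmt"
	"os/exec"
	"sync"
	"time"
)

// enableAddons sketches the fan-out visible in the log: one goroutine per
// addon, each issuing one kubectl apply over all of that addon's manifest
// files and reporting its duration on completion. Paths are illustrative.
func enableAddons(sets map[string][]string) {
	var wg sync.WaitGroup
	for name, files := range sets {
		wg.Add(1)
		go func(name string, files []string) {
			defer wg.Done()
			start := time.Now()
			args := []string{"KUBECONFIG=/var/lib/minikube/kubeconfig",
				"/var/lib/minikube/binaries/v1.20.0/kubectl", "apply", "--force"}
			for _, f := range files {
				args = append(args, "-f", f)
			}
			if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
				fmt.Printf("%s failed: %v\n%s", name, err, out)
				return
			}
			fmt.Printf("%s: took %v\n", name, time.Since(start))
		}(name, files)
	}
	wg.Wait()
}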
	I0916 20:03:51.293289  929978 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-908284" in "kube-system" namespace has status "Ready":"False"
	I0916 20:03:53.793680  929978 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-908284" in "kube-system" namespace has status "Ready":"False"
	I0916 20:03:55.793827  929978 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-908284" in "kube-system" namespace has status "Ready":"False"
	I0916 20:03:58.293751  929978 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-908284" in "kube-system" namespace has status "Ready":"False"
	I0916 20:04:00.295172  929978 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-908284" in "kube-system" namespace has status "Ready":"False"
	I0916 20:04:02.794086  929978 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-908284" in "kube-system" namespace has status "Ready":"False"
	I0916 20:04:05.293108  929978 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-908284" in "kube-system" namespace has status "Ready":"False"
	I0916 20:04:07.793249  929978 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-908284" in "kube-system" namespace has status "Ready":"False"
	I0916 20:04:09.793869  929978 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-908284" in "kube-system" namespace has status "Ready":"False"
	I0916 20:04:12.295834  929978 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-908284" in "kube-system" namespace has status "Ready":"False"
	I0916 20:04:14.795291  929978 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-908284" in "kube-system" namespace has status "Ready":"False"
	I0916 20:04:17.294095  929978 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-908284" in "kube-system" namespace has status "Ready":"False"
	I0916 20:04:19.794425  929978 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-908284" in "kube-system" namespace has status "Ready":"False"
	I0916 20:04:21.794752  929978 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-908284" in "kube-system" namespace has status "Ready":"False"
	I0916 20:04:23.795095  929978 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-908284" in "kube-system" namespace has status "Ready":"False"
	I0916 20:04:26.295863  929978 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-908284" in "kube-system" namespace has status "Ready":"False"
	I0916 20:04:28.793279  929978 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-908284" in "kube-system" namespace has status "Ready":"False"
	I0916 20:04:30.793984  929978 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-908284" in "kube-system" namespace has status "Ready":"False"
	I0916 20:04:33.293543  929978 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-908284" in "kube-system" namespace has status "Ready":"False"
	I0916 20:04:35.293808  929978 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-908284" in "kube-system" namespace has status "Ready":"False"
	I0916 20:04:37.294364  929978 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-908284" in "kube-system" namespace has status "Ready":"False"
	I0916 20:04:39.793785  929978 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-908284" in "kube-system" namespace has status "Ready":"False"
	I0916 20:04:41.794434  929978 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-908284" in "kube-system" namespace has status "Ready":"False"
	I0916 20:04:44.295268  929978 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-908284" in "kube-system" namespace has status "Ready":"False"
	I0916 20:04:46.793568  929978 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-908284" in "kube-system" namespace has status "Ready":"False"
	I0916 20:04:48.794748  929978 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-908284" in "kube-system" namespace has status "Ready":"False"
	I0916 20:04:50.796076  929978 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-908284" in "kube-system" namespace has status "Ready":"False"
	I0916 20:04:53.293622  929978 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-908284" in "kube-system" namespace has status "Ready":"False"
	I0916 20:04:55.793715  929978 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-908284" in "kube-system" namespace has status "Ready":"False"
	I0916 20:04:57.793791  929978 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-908284" in "kube-system" namespace has status "Ready":"False"
	I0916 20:04:59.794779  929978 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-908284" in "kube-system" namespace has status "Ready":"False"
	I0916 20:05:01.794994  929978 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-908284" in "kube-system" namespace has status "Ready":"False"
	I0916 20:05:03.800949  929978 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-908284" in "kube-system" namespace has status "Ready":"False"
	I0916 20:05:06.293487  929978 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-908284" in "kube-system" namespace has status "Ready":"False"
	I0916 20:05:08.793919  929978 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-908284" in "kube-system" namespace has status "Ready":"False"
	I0916 20:05:10.794119  929978 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-908284" in "kube-system" namespace has status "Ready":"False"
	I0916 20:05:13.293951  929978 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-908284" in "kube-system" namespace has status "Ready":"False"
	I0916 20:05:15.793795  929978 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-908284" in "kube-system" namespace has status "Ready":"False"
	I0916 20:05:17.794658  929978 pod_ready.go:93] pod "kube-controller-manager-old-k8s-version-908284" in "kube-system" namespace has status "Ready":"True"
	I0916 20:05:17.794684  929978 pod_ready.go:82] duration metric: took 1m28.507548808s for pod "kube-controller-manager-old-k8s-version-908284" in "kube-system" namespace to be "Ready" ...
	I0916 20:05:17.794697  929978 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-5drw5" in "kube-system" namespace to be "Ready" ...
	I0916 20:05:17.800466  929978 pod_ready.go:93] pod "kube-proxy-5drw5" in "kube-system" namespace has status "Ready":"True"
	I0916 20:05:17.800492  929978 pod_ready.go:82] duration metric: took 5.78734ms for pod "kube-proxy-5drw5" in "kube-system" namespace to be "Ready" ...
	I0916 20:05:17.800518  929978 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-908284" in "kube-system" namespace to be "Ready" ...
	I0916 20:05:17.817160  929978 pod_ready.go:93] pod "kube-scheduler-old-k8s-version-908284" in "kube-system" namespace has status "Ready":"True"
	I0916 20:05:17.817187  929978 pod_ready.go:82] duration metric: took 16.660177ms for pod "kube-scheduler-old-k8s-version-908284" in "kube-system" namespace to be "Ready" ...
	I0916 20:05:17.817200  929978 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-9975d5f86-92f4t" in "kube-system" namespace to be "Ready" ...
	I0916 20:05:19.823065  929978 pod_ready.go:103] pod "metrics-server-9975d5f86-92f4t" in "kube-system" namespace has status "Ready":"False"
	I0916 20:05:21.823435  929978 pod_ready.go:103] pod "metrics-server-9975d5f86-92f4t" in "kube-system" namespace has status "Ready":"False"
	I0916 20:05:23.823967  929978 pod_ready.go:103] pod "metrics-server-9975d5f86-92f4t" in "kube-system" namespace has status "Ready":"False"
	I0916 20:05:26.323357  929978 pod_ready.go:103] pod "metrics-server-9975d5f86-92f4t" in "kube-system" namespace has status "Ready":"False"
	I0916 20:05:28.324179  929978 pod_ready.go:103] pod "metrics-server-9975d5f86-92f4t" in "kube-system" namespace has status "Ready":"False"
	I0916 20:05:30.827820  929978 pod_ready.go:103] pod "metrics-server-9975d5f86-92f4t" in "kube-system" namespace has status "Ready":"False"
	I0916 20:05:33.325069  929978 pod_ready.go:103] pod "metrics-server-9975d5f86-92f4t" in "kube-system" namespace has status "Ready":"False"
	I0916 20:05:35.823156  929978 pod_ready.go:103] pod "metrics-server-9975d5f86-92f4t" in "kube-system" namespace has status "Ready":"False"
	I0916 20:05:38.323180  929978 pod_ready.go:103] pod "metrics-server-9975d5f86-92f4t" in "kube-system" namespace has status "Ready":"False"
	I0916 20:05:40.323498  929978 pod_ready.go:103] pod "metrics-server-9975d5f86-92f4t" in "kube-system" namespace has status "Ready":"False"
	I0916 20:05:42.323865  929978 pod_ready.go:103] pod "metrics-server-9975d5f86-92f4t" in "kube-system" namespace has status "Ready":"False"
	I0916 20:05:44.324749  929978 pod_ready.go:103] pod "metrics-server-9975d5f86-92f4t" in "kube-system" namespace has status "Ready":"False"
	I0916 20:05:46.823020  929978 pod_ready.go:103] pod "metrics-server-9975d5f86-92f4t" in "kube-system" namespace has status "Ready":"False"
	I0916 20:05:48.823896  929978 pod_ready.go:103] pod "metrics-server-9975d5f86-92f4t" in "kube-system" namespace has status "Ready":"False"
	I0916 20:05:51.324426  929978 pod_ready.go:103] pod "metrics-server-9975d5f86-92f4t" in "kube-system" namespace has status "Ready":"False"
	I0916 20:05:53.828268  929978 pod_ready.go:103] pod "metrics-server-9975d5f86-92f4t" in "kube-system" namespace has status "Ready":"False"
	I0916 20:05:56.324043  929978 pod_ready.go:103] pod "metrics-server-9975d5f86-92f4t" in "kube-system" namespace has status "Ready":"False"
	I0916 20:05:58.823569  929978 pod_ready.go:103] pod "metrics-server-9975d5f86-92f4t" in "kube-system" namespace has status "Ready":"False"
	I0916 20:06:00.823711  929978 pod_ready.go:103] pod "metrics-server-9975d5f86-92f4t" in "kube-system" namespace has status "Ready":"False"
	I0916 20:06:03.323457  929978 pod_ready.go:103] pod "metrics-server-9975d5f86-92f4t" in "kube-system" namespace has status "Ready":"False"
	I0916 20:06:05.323726  929978 pod_ready.go:103] pod "metrics-server-9975d5f86-92f4t" in "kube-system" namespace has status "Ready":"False"
	I0916 20:06:07.823262  929978 pod_ready.go:103] pod "metrics-server-9975d5f86-92f4t" in "kube-system" namespace has status "Ready":"False"
	I0916 20:06:09.823618  929978 pod_ready.go:103] pod "metrics-server-9975d5f86-92f4t" in "kube-system" namespace has status "Ready":"False"
	I0916 20:06:12.324135  929978 pod_ready.go:103] pod "metrics-server-9975d5f86-92f4t" in "kube-system" namespace has status "Ready":"False"
	I0916 20:06:14.823150  929978 pod_ready.go:103] pod "metrics-server-9975d5f86-92f4t" in "kube-system" namespace has status "Ready":"False"
	I0916 20:06:17.323266  929978 pod_ready.go:103] pod "metrics-server-9975d5f86-92f4t" in "kube-system" namespace has status "Ready":"False"
	I0916 20:06:19.854912  929978 pod_ready.go:103] pod "metrics-server-9975d5f86-92f4t" in "kube-system" namespace has status "Ready":"False"
	I0916 20:06:22.323428  929978 pod_ready.go:103] pod "metrics-server-9975d5f86-92f4t" in "kube-system" namespace has status "Ready":"False"
	I0916 20:06:24.323518  929978 pod_ready.go:103] pod "metrics-server-9975d5f86-92f4t" in "kube-system" namespace has status "Ready":"False"
	I0916 20:06:26.324317  929978 pod_ready.go:103] pod "metrics-server-9975d5f86-92f4t" in "kube-system" namespace has status "Ready":"False"
	I0916 20:06:28.825677  929978 pod_ready.go:103] pod "metrics-server-9975d5f86-92f4t" in "kube-system" namespace has status "Ready":"False"
	I0916 20:06:31.323779  929978 pod_ready.go:103] pod "metrics-server-9975d5f86-92f4t" in "kube-system" namespace has status "Ready":"False"
	I0916 20:06:33.324184  929978 pod_ready.go:103] pod "metrics-server-9975d5f86-92f4t" in "kube-system" namespace has status "Ready":"False"
	I0916 20:06:35.324407  929978 pod_ready.go:103] pod "metrics-server-9975d5f86-92f4t" in "kube-system" namespace has status "Ready":"False"
	I0916 20:06:37.822675  929978 pod_ready.go:103] pod "metrics-server-9975d5f86-92f4t" in "kube-system" namespace has status "Ready":"False"
	I0916 20:06:39.823001  929978 pod_ready.go:103] pod "metrics-server-9975d5f86-92f4t" in "kube-system" namespace has status "Ready":"False"
	I0916 20:06:42.324362  929978 pod_ready.go:103] pod "metrics-server-9975d5f86-92f4t" in "kube-system" namespace has status "Ready":"False"
	I0916 20:06:44.825513  929978 pod_ready.go:103] pod "metrics-server-9975d5f86-92f4t" in "kube-system" namespace has status "Ready":"False"
	I0916 20:06:47.324529  929978 pod_ready.go:103] pod "metrics-server-9975d5f86-92f4t" in "kube-system" namespace has status "Ready":"False"
	I0916 20:06:49.324646  929978 pod_ready.go:103] pod "metrics-server-9975d5f86-92f4t" in "kube-system" namespace has status "Ready":"False"
	I0916 20:06:51.822474  929978 pod_ready.go:103] pod "metrics-server-9975d5f86-92f4t" in "kube-system" namespace has status "Ready":"False"
	I0916 20:06:54.323686  929978 pod_ready.go:103] pod "metrics-server-9975d5f86-92f4t" in "kube-system" namespace has status "Ready":"False"
	I0916 20:06:56.326950  929978 pod_ready.go:103] pod "metrics-server-9975d5f86-92f4t" in "kube-system" namespace has status "Ready":"False"
	I0916 20:06:58.823828  929978 pod_ready.go:103] pod "metrics-server-9975d5f86-92f4t" in "kube-system" namespace has status "Ready":"False"
	I0916 20:07:01.325012  929978 pod_ready.go:103] pod "metrics-server-9975d5f86-92f4t" in "kube-system" namespace has status "Ready":"False"
	I0916 20:07:03.823085  929978 pod_ready.go:103] pod "metrics-server-9975d5f86-92f4t" in "kube-system" namespace has status "Ready":"False"
	I0916 20:07:06.324393  929978 pod_ready.go:103] pod "metrics-server-9975d5f86-92f4t" in "kube-system" namespace has status "Ready":"False"
	I0916 20:07:08.823902  929978 pod_ready.go:103] pod "metrics-server-9975d5f86-92f4t" in "kube-system" namespace has status "Ready":"False"
	I0916 20:07:10.824795  929978 pod_ready.go:103] pod "metrics-server-9975d5f86-92f4t" in "kube-system" namespace has status "Ready":"False"
	I0916 20:07:13.323905  929978 pod_ready.go:103] pod "metrics-server-9975d5f86-92f4t" in "kube-system" namespace has status "Ready":"False"
	I0916 20:07:15.324729  929978 pod_ready.go:103] pod "metrics-server-9975d5f86-92f4t" in "kube-system" namespace has status "Ready":"False"
	I0916 20:07:17.823527  929978 pod_ready.go:103] pod "metrics-server-9975d5f86-92f4t" in "kube-system" namespace has status "Ready":"False"
	I0916 20:07:20.323007  929978 pod_ready.go:103] pod "metrics-server-9975d5f86-92f4t" in "kube-system" namespace has status "Ready":"False"
	I0916 20:07:22.323265  929978 pod_ready.go:103] pod "metrics-server-9975d5f86-92f4t" in "kube-system" namespace has status "Ready":"False"
	I0916 20:07:24.323370  929978 pod_ready.go:103] pod "metrics-server-9975d5f86-92f4t" in "kube-system" namespace has status "Ready":"False"
	I0916 20:07:26.324382  929978 pod_ready.go:103] pod "metrics-server-9975d5f86-92f4t" in "kube-system" namespace has status "Ready":"False"
	I0916 20:07:28.829280  929978 pod_ready.go:103] pod "metrics-server-9975d5f86-92f4t" in "kube-system" namespace has status "Ready":"False"
	I0916 20:07:31.323169  929978 pod_ready.go:103] pod "metrics-server-9975d5f86-92f4t" in "kube-system" namespace has status "Ready":"False"
	I0916 20:07:33.822121  929978 pod_ready.go:103] pod "metrics-server-9975d5f86-92f4t" in "kube-system" namespace has status "Ready":"False"
	I0916 20:07:35.823282  929978 pod_ready.go:103] pod "metrics-server-9975d5f86-92f4t" in "kube-system" namespace has status "Ready":"False"
	I0916 20:07:37.823651  929978 pod_ready.go:103] pod "metrics-server-9975d5f86-92f4t" in "kube-system" namespace has status "Ready":"False"
	I0916 20:07:40.323982  929978 pod_ready.go:103] pod "metrics-server-9975d5f86-92f4t" in "kube-system" namespace has status "Ready":"False"
	I0916 20:07:42.325505  929978 pod_ready.go:103] pod "metrics-server-9975d5f86-92f4t" in "kube-system" namespace has status "Ready":"False"
	I0916 20:07:44.823281  929978 pod_ready.go:103] pod "metrics-server-9975d5f86-92f4t" in "kube-system" namespace has status "Ready":"False"
	I0916 20:07:47.322841  929978 pod_ready.go:103] pod "metrics-server-9975d5f86-92f4t" in "kube-system" namespace has status "Ready":"False"
	I0916 20:07:49.324299  929978 pod_ready.go:103] pod "metrics-server-9975d5f86-92f4t" in "kube-system" namespace has status "Ready":"False"
	I0916 20:07:51.823606  929978 pod_ready.go:103] pod "metrics-server-9975d5f86-92f4t" in "kube-system" namespace has status "Ready":"False"
	I0916 20:07:54.324229  929978 pod_ready.go:103] pod "metrics-server-9975d5f86-92f4t" in "kube-system" namespace has status "Ready":"False"
	I0916 20:07:56.822801  929978 pod_ready.go:103] pod "metrics-server-9975d5f86-92f4t" in "kube-system" namespace has status "Ready":"False"
	I0916 20:07:58.823239  929978 pod_ready.go:103] pod "metrics-server-9975d5f86-92f4t" in "kube-system" namespace has status "Ready":"False"
	I0916 20:08:00.826925  929978 pod_ready.go:103] pod "metrics-server-9975d5f86-92f4t" in "kube-system" namespace has status "Ready":"False"
	I0916 20:08:03.323009  929978 pod_ready.go:103] pod "metrics-server-9975d5f86-92f4t" in "kube-system" namespace has status "Ready":"False"
	I0916 20:08:05.329004  929978 pod_ready.go:103] pod "metrics-server-9975d5f86-92f4t" in "kube-system" namespace has status "Ready":"False"
	I0916 20:08:07.822262  929978 pod_ready.go:103] pod "metrics-server-9975d5f86-92f4t" in "kube-system" namespace has status "Ready":"False"
	I0916 20:08:09.822402  929978 pod_ready.go:103] pod "metrics-server-9975d5f86-92f4t" in "kube-system" namespace has status "Ready":"False"
	I0916 20:08:11.828986  929978 pod_ready.go:103] pod "metrics-server-9975d5f86-92f4t" in "kube-system" namespace has status "Ready":"False"
	I0916 20:08:14.324843  929978 pod_ready.go:103] pod "metrics-server-9975d5f86-92f4t" in "kube-system" namespace has status "Ready":"False"
	I0916 20:08:16.823348  929978 pod_ready.go:103] pod "metrics-server-9975d5f86-92f4t" in "kube-system" namespace has status "Ready":"False"
	I0916 20:08:19.323564  929978 pod_ready.go:103] pod "metrics-server-9975d5f86-92f4t" in "kube-system" namespace has status "Ready":"False"
	I0916 20:08:21.324015  929978 pod_ready.go:103] pod "metrics-server-9975d5f86-92f4t" in "kube-system" namespace has status "Ready":"False"
	I0916 20:08:23.822969  929978 pod_ready.go:103] pod "metrics-server-9975d5f86-92f4t" in "kube-system" namespace has status "Ready":"False"
	I0916 20:08:25.823074  929978 pod_ready.go:103] pod "metrics-server-9975d5f86-92f4t" in "kube-system" namespace has status "Ready":"False"
	I0916 20:08:28.323296  929978 pod_ready.go:103] pod "metrics-server-9975d5f86-92f4t" in "kube-system" namespace has status "Ready":"False"
	I0916 20:08:30.324276  929978 pod_ready.go:103] pod "metrics-server-9975d5f86-92f4t" in "kube-system" namespace has status "Ready":"False"
	I0916 20:08:32.822524  929978 pod_ready.go:103] pod "metrics-server-9975d5f86-92f4t" in "kube-system" namespace has status "Ready":"False"
	I0916 20:08:34.822871  929978 pod_ready.go:103] pod "metrics-server-9975d5f86-92f4t" in "kube-system" namespace has status "Ready":"False"
	I0916 20:08:36.823048  929978 pod_ready.go:103] pod "metrics-server-9975d5f86-92f4t" in "kube-system" namespace has status "Ready":"False"
	I0916 20:08:38.823699  929978 pod_ready.go:103] pod "metrics-server-9975d5f86-92f4t" in "kube-system" namespace has status "Ready":"False"
	I0916 20:08:41.323632  929978 pod_ready.go:103] pod "metrics-server-9975d5f86-92f4t" in "kube-system" namespace has status "Ready":"False"
	I0916 20:08:43.323993  929978 pod_ready.go:103] pod "metrics-server-9975d5f86-92f4t" in "kube-system" namespace has status "Ready":"False"
	I0916 20:08:45.324626  929978 pod_ready.go:103] pod "metrics-server-9975d5f86-92f4t" in "kube-system" namespace has status "Ready":"False"
	I0916 20:08:47.822699  929978 pod_ready.go:103] pod "metrics-server-9975d5f86-92f4t" in "kube-system" namespace has status "Ready":"False"
	I0916 20:08:49.823475  929978 pod_ready.go:103] pod "metrics-server-9975d5f86-92f4t" in "kube-system" namespace has status "Ready":"False"
	I0916 20:08:51.825200  929978 pod_ready.go:103] pod "metrics-server-9975d5f86-92f4t" in "kube-system" namespace has status "Ready":"False"
	I0916 20:08:54.323884  929978 pod_ready.go:103] pod "metrics-server-9975d5f86-92f4t" in "kube-system" namespace has status "Ready":"False"
	I0916 20:08:56.324284  929978 pod_ready.go:103] pod "metrics-server-9975d5f86-92f4t" in "kube-system" namespace has status "Ready":"False"
	I0916 20:08:58.324463  929978 pod_ready.go:103] pod "metrics-server-9975d5f86-92f4t" in "kube-system" namespace has status "Ready":"False"
	I0916 20:09:00.328101  929978 pod_ready.go:103] pod "metrics-server-9975d5f86-92f4t" in "kube-system" namespace has status "Ready":"False"
	I0916 20:09:02.822791  929978 pod_ready.go:103] pod "metrics-server-9975d5f86-92f4t" in "kube-system" namespace has status "Ready":"False"
	I0916 20:09:04.823126  929978 pod_ready.go:103] pod "metrics-server-9975d5f86-92f4t" in "kube-system" namespace has status "Ready":"False"
	I0916 20:09:06.823454  929978 pod_ready.go:103] pod "metrics-server-9975d5f86-92f4t" in "kube-system" namespace has status "Ready":"False"
	I0916 20:09:09.326657  929978 pod_ready.go:103] pod "metrics-server-9975d5f86-92f4t" in "kube-system" namespace has status "Ready":"False"
	I0916 20:09:11.823391  929978 pod_ready.go:103] pod "metrics-server-9975d5f86-92f4t" in "kube-system" namespace has status "Ready":"False"
	I0916 20:09:14.323653  929978 pod_ready.go:103] pod "metrics-server-9975d5f86-92f4t" in "kube-system" namespace has status "Ready":"False"
	I0916 20:09:16.822318  929978 pod_ready.go:103] pod "metrics-server-9975d5f86-92f4t" in "kube-system" namespace has status "Ready":"False"
	I0916 20:09:17.823086  929978 pod_ready.go:82] duration metric: took 4m0.005870228s for pod "metrics-server-9975d5f86-92f4t" in "kube-system" namespace to be "Ready" ...
	E0916 20:09:17.823114  929978 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0916 20:09:17.823124  929978 pod_ready.go:39] duration metric: took 5m28.803097177s of extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
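
The 4m0s failure above is a bounded poll: the pod_ready.go loop re-checks the pod's Ready condition every couple of seconds until the context deadline expires, which is why the log shows one "Ready":"False" line per poll and then "context deadline exceeded". A minimal sketch of that pattern with client-go follows; the function name and the 2s interval are illustrative, not minikube's actual implementation.

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitPodReady polls a pod's Ready condition until it is True or the
// timeout elapses, mirroring the pod_ready.go wait loop above.
func waitPodReady(ctx context.Context, c kubernetes.Interface, ns, name string, timeout time.Duration) error {
	ctx, cancel := context.WithTimeout(ctx, timeout)
	defer cancel()
	for {
		pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, cond := range pod.Status.Conditions {
				if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
					return nil // status "Ready":"True"
				}
			}
		}
		select {
		case <-ctx.Done():
			return ctx.Err() // "context deadline exceeded", as at 20:09:17
		case <-time.After(2 * time.Second): // poll interval (assumed)
		}
	}
}
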
	I0916 20:09:17.823140  929978 api_server.go:52] waiting for apiserver process to appear ...
	I0916 20:09:17.823170  929978 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0916 20:09:17.823237  929978 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 20:09:17.887703  929978 cri.go:89] found id: "5125f7e68621ccafdca0574900a08627704d25dad8c6cb286177bcceafb722f3"
	I0916 20:09:17.887725  929978 cri.go:89] found id: "29503eaa5c2ae2b3f8ad37d2fa456369ed87669ef1293a388883a927f2d6f5bd"
	I0916 20:09:17.887730  929978 cri.go:89] found id: ""
	I0916 20:09:17.887737  929978 logs.go:276] 2 containers: [5125f7e68621ccafdca0574900a08627704d25dad8c6cb286177bcceafb722f3 29503eaa5c2ae2b3f8ad37d2fa456369ed87669ef1293a388883a927f2d6f5bd]
	I0916 20:09:17.887795  929978 ssh_runner.go:195] Run: which crictl
	I0916 20:09:17.892374  929978 ssh_runner.go:195] Run: which crictl
	I0916 20:09:17.896470  929978 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0916 20:09:17.896545  929978 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 20:09:17.949422  929978 cri.go:89] found id: "445fc47c234683c922f709203c6b1528636824cf9904de9437bfad48f5bb40bb"
	I0916 20:09:17.949442  929978 cri.go:89] found id: "b1144ab00f4c3249a9dbe4fed6b1368a14fe5aba7783b45f4fa53cc5e203ce97"
	I0916 20:09:17.949448  929978 cri.go:89] found id: ""
	I0916 20:09:17.949455  929978 logs.go:276] 2 containers: [445fc47c234683c922f709203c6b1528636824cf9904de9437bfad48f5bb40bb b1144ab00f4c3249a9dbe4fed6b1368a14fe5aba7783b45f4fa53cc5e203ce97]
	I0916 20:09:17.949511  929978 ssh_runner.go:195] Run: which crictl
	I0916 20:09:17.953816  929978 ssh_runner.go:195] Run: which crictl
	I0916 20:09:17.958202  929978 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0916 20:09:17.958327  929978 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 20:09:18.039723  929978 cri.go:89] found id: "7d87e62b9d7540a8d0fcd059083894029feeab7ea8f0a8cacd8811b01eee9456"
	I0916 20:09:18.039746  929978 cri.go:89] found id: "29165aa257b4ed82c3bed159074da5f7d4b4358a908e5b0b2f105c538186699c"
	I0916 20:09:18.039752  929978 cri.go:89] found id: ""
	I0916 20:09:18.039760  929978 logs.go:276] 2 containers: [7d87e62b9d7540a8d0fcd059083894029feeab7ea8f0a8cacd8811b01eee9456 29165aa257b4ed82c3bed159074da5f7d4b4358a908e5b0b2f105c538186699c]
	I0916 20:09:18.039813  929978 ssh_runner.go:195] Run: which crictl
	I0916 20:09:18.044484  929978 ssh_runner.go:195] Run: which crictl
	I0916 20:09:18.049578  929978 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0916 20:09:18.049658  929978 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 20:09:18.112435  929978 cri.go:89] found id: "7cdfa563a6ecfd1d7a2de5d367a101d25b1393c9cd20db9b8cc1ac35ca3d5911"
	I0916 20:09:18.112508  929978 cri.go:89] found id: "e72d9de27e5e47273b9413248459297d25f9204a3d3bdbe871f68a09eed8cc31"
	I0916 20:09:18.112529  929978 cri.go:89] found id: ""
	I0916 20:09:18.112553  929978 logs.go:276] 2 containers: [7cdfa563a6ecfd1d7a2de5d367a101d25b1393c9cd20db9b8cc1ac35ca3d5911 e72d9de27e5e47273b9413248459297d25f9204a3d3bdbe871f68a09eed8cc31]
	I0916 20:09:18.112655  929978 ssh_runner.go:195] Run: which crictl
	I0916 20:09:18.117471  929978 ssh_runner.go:195] Run: which crictl
	I0916 20:09:18.122330  929978 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0916 20:09:18.122467  929978 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 20:09:18.184161  929978 cri.go:89] found id: "d5453c9c01ecd9cc0d09b6bf868960b6312637344560d06faeca7704cd561607"
	I0916 20:09:18.184193  929978 cri.go:89] found id: "f1a07ea1e6c1902d18aad82fc148e1704278cff58e2f38adcb46954946abe5af"
	I0916 20:09:18.184199  929978 cri.go:89] found id: ""
	I0916 20:09:18.184213  929978 logs.go:276] 2 containers: [d5453c9c01ecd9cc0d09b6bf868960b6312637344560d06faeca7704cd561607 f1a07ea1e6c1902d18aad82fc148e1704278cff58e2f38adcb46954946abe5af]
	I0916 20:09:18.184306  929978 ssh_runner.go:195] Run: which crictl
	I0916 20:09:18.188450  929978 ssh_runner.go:195] Run: which crictl
	I0916 20:09:18.192972  929978 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 20:09:18.193085  929978 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 20:09:18.247607  929978 cri.go:89] found id: "e72de83d21dbee5bba189dbb08793f2149fe20dd123b4ab09058a74285c67018"
	I0916 20:09:18.247666  929978 cri.go:89] found id: "34ed62a120d6504514cf022878498c06dd4558aff9f75e87eff0e60b822c82b0"
	I0916 20:09:18.247685  929978 cri.go:89] found id: ""
	I0916 20:09:18.247708  929978 logs.go:276] 2 containers: [e72de83d21dbee5bba189dbb08793f2149fe20dd123b4ab09058a74285c67018 34ed62a120d6504514cf022878498c06dd4558aff9f75e87eff0e60b822c82b0]
	I0916 20:09:18.247769  929978 ssh_runner.go:195] Run: which crictl
	I0916 20:09:18.251497  929978 ssh_runner.go:195] Run: which crictl
	I0916 20:09:18.266313  929978 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0916 20:09:18.266384  929978 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 20:09:18.314122  929978 cri.go:89] found id: "eb8d587a9ae36a864b49240c05c5eb3eef4bbd1461b753f08ec024058d7f6b87"
	I0916 20:09:18.314141  929978 cri.go:89] found id: "73b796fc3c23c7bdf0b3e94af03866db4c7513e7f70c3a040a2734d0323c37be"
	I0916 20:09:18.314146  929978 cri.go:89] found id: ""
	I0916 20:09:18.314153  929978 logs.go:276] 2 containers: [eb8d587a9ae36a864b49240c05c5eb3eef4bbd1461b753f08ec024058d7f6b87 73b796fc3c23c7bdf0b3e94af03866db4c7513e7f70c3a040a2734d0323c37be]
	I0916 20:09:18.314307  929978 ssh_runner.go:195] Run: which crictl
	I0916 20:09:18.318288  929978 ssh_runner.go:195] Run: which crictl
	I0916 20:09:18.322294  929978 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0916 20:09:18.322367  929978 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0916 20:09:18.378645  929978 cri.go:89] found id: "acc8d7546233616133689ca5f3763e68bd10522884488e5cf9ab9bf09947cb67"
	I0916 20:09:18.378666  929978 cri.go:89] found id: "05471d3e1c31ae8e82b32033906c8b2d9d329a3ea9850acf38ac17d8175331ed"
	I0916 20:09:18.378670  929978 cri.go:89] found id: ""
	I0916 20:09:18.378678  929978 logs.go:276] 2 containers: [acc8d7546233616133689ca5f3763e68bd10522884488e5cf9ab9bf09947cb67 05471d3e1c31ae8e82b32033906c8b2d9d329a3ea9850acf38ac17d8175331ed]
	I0916 20:09:18.378730  929978 ssh_runner.go:195] Run: which crictl
	I0916 20:09:18.382720  929978 ssh_runner.go:195] Run: which crictl
	I0916 20:09:18.386402  929978 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0916 20:09:18.386473  929978 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0916 20:09:18.436672  929978 cri.go:89] found id: "0f208bbb678a1a4ad386a9e37deb1b867d200257d179670c26adcc562f0e4cf2"
	I0916 20:09:18.436698  929978 cri.go:89] found id: ""
	I0916 20:09:18.436707  929978 logs.go:276] 1 containers: [0f208bbb678a1a4ad386a9e37deb1b867d200257d179670c26adcc562f0e4cf2]
	I0916 20:09:18.436768  929978 ssh_runner.go:195] Run: which crictl
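
Each cri.go block above follows the same shape: resolve the binary with `which crictl`, then list matching container IDs with `sudo crictl ps -a --quiet --name=<component>`, which prints one 64-hex ID per line. A short Go sketch of that enumeration step, shelling out exactly as the ssh_runner lines show (error handling trimmed; the helper name is hypothetical):

import (
	"os/exec"
	"strings"
)

// containerIDs lists all container IDs (any state) whose name matches
// the given component, as the cri.go listings above do.
func containerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil // one ID per output line
}
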
	I0916 20:09:18.443002  929978 logs.go:123] Gathering logs for kube-scheduler [e72d9de27e5e47273b9413248459297d25f9204a3d3bdbe871f68a09eed8cc31] ...
	I0916 20:09:18.443029  929978 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e72d9de27e5e47273b9413248459297d25f9204a3d3bdbe871f68a09eed8cc31"
	I0916 20:09:18.496133  929978 logs.go:123] Gathering logs for kube-controller-manager [e72de83d21dbee5bba189dbb08793f2149fe20dd123b4ab09058a74285c67018] ...
	I0916 20:09:18.496165  929978 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e72de83d21dbee5bba189dbb08793f2149fe20dd123b4ab09058a74285c67018"
	I0916 20:09:18.577284  929978 logs.go:123] Gathering logs for kindnet [73b796fc3c23c7bdf0b3e94af03866db4c7513e7f70c3a040a2734d0323c37be] ...
	I0916 20:09:18.577321  929978 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 73b796fc3c23c7bdf0b3e94af03866db4c7513e7f70c3a040a2734d0323c37be"
	I0916 20:09:18.627998  929978 logs.go:123] Gathering logs for storage-provisioner [acc8d7546233616133689ca5f3763e68bd10522884488e5cf9ab9bf09947cb67] ...
	I0916 20:09:18.628028  929978 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 acc8d7546233616133689ca5f3763e68bd10522884488e5cf9ab9bf09947cb67"
	I0916 20:09:18.700396  929978 logs.go:123] Gathering logs for dmesg ...
	I0916 20:09:18.700424  929978 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 20:09:18.724222  929978 logs.go:123] Gathering logs for coredns [29165aa257b4ed82c3bed159074da5f7d4b4358a908e5b0b2f105c538186699c] ...
	I0916 20:09:18.724246  929978 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 29165aa257b4ed82c3bed159074da5f7d4b4358a908e5b0b2f105c538186699c"
	I0916 20:09:18.768720  929978 logs.go:123] Gathering logs for kube-scheduler [7cdfa563a6ecfd1d7a2de5d367a101d25b1393c9cd20db9b8cc1ac35ca3d5911] ...
	I0916 20:09:18.768747  929978 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7cdfa563a6ecfd1d7a2de5d367a101d25b1393c9cd20db9b8cc1ac35ca3d5911"
	I0916 20:09:18.837357  929978 logs.go:123] Gathering logs for storage-provisioner [05471d3e1c31ae8e82b32033906c8b2d9d329a3ea9850acf38ac17d8175331ed] ...
	I0916 20:09:18.837382  929978 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 05471d3e1c31ae8e82b32033906c8b2d9d329a3ea9850acf38ac17d8175331ed"
	I0916 20:09:18.884049  929978 logs.go:123] Gathering logs for kubernetes-dashboard [0f208bbb678a1a4ad386a9e37deb1b867d200257d179670c26adcc562f0e4cf2] ...
	I0916 20:09:18.884084  929978 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f208bbb678a1a4ad386a9e37deb1b867d200257d179670c26adcc562f0e4cf2"
	I0916 20:09:18.925904  929978 logs.go:123] Gathering logs for container status ...
	I0916 20:09:18.925934  929978 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 20:09:19.008646  929978 logs.go:123] Gathering logs for kube-apiserver [5125f7e68621ccafdca0574900a08627704d25dad8c6cb286177bcceafb722f3] ...
	I0916 20:09:19.008679  929978 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5125f7e68621ccafdca0574900a08627704d25dad8c6cb286177bcceafb722f3"
	I0916 20:09:19.102558  929978 logs.go:123] Gathering logs for kube-controller-manager [34ed62a120d6504514cf022878498c06dd4558aff9f75e87eff0e60b822c82b0] ...
	I0916 20:09:19.102606  929978 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 34ed62a120d6504514cf022878498c06dd4558aff9f75e87eff0e60b822c82b0"
	I0916 20:09:19.188717  929978 logs.go:123] Gathering logs for kindnet [eb8d587a9ae36a864b49240c05c5eb3eef4bbd1461b753f08ec024058d7f6b87] ...
	I0916 20:09:19.188748  929978 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb8d587a9ae36a864b49240c05c5eb3eef4bbd1461b753f08ec024058d7f6b87"
	I0916 20:09:19.252748  929978 logs.go:123] Gathering logs for coredns [7d87e62b9d7540a8d0fcd059083894029feeab7ea8f0a8cacd8811b01eee9456] ...
	I0916 20:09:19.252774  929978 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7d87e62b9d7540a8d0fcd059083894029feeab7ea8f0a8cacd8811b01eee9456"
	I0916 20:09:19.306761  929978 logs.go:123] Gathering logs for kube-proxy [d5453c9c01ecd9cc0d09b6bf868960b6312637344560d06faeca7704cd561607] ...
	I0916 20:09:19.306788  929978 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d5453c9c01ecd9cc0d09b6bf868960b6312637344560d06faeca7704cd561607"
	I0916 20:09:19.369937  929978 logs.go:123] Gathering logs for containerd ...
	I0916 20:09:19.369966  929978 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0916 20:09:19.434175  929978 logs.go:123] Gathering logs for kubelet ...
	I0916 20:09:19.434213  929978 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0916 20:09:19.505865  929978 logs.go:138] Found kubelet problem: Sep 16 20:03:48 old-k8s-version-908284 kubelet[661]: E0916 20:03:48.787093     661 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-908284" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-908284' and this object
	W0916 20:09:19.506089  929978 logs.go:138] Found kubelet problem: Sep 16 20:03:48 old-k8s-version-908284 kubelet[661]: E0916 20:03:48.787518     661 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-908284" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-908284' and this object
	W0916 20:09:19.506307  929978 logs.go:138] Found kubelet problem: Sep 16 20:03:48 old-k8s-version-908284 kubelet[661]: E0916 20:03:48.787702     661 reflector.go:138] object-"kube-system"/"kube-proxy-token-87lls": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-87lls" is forbidden: User "system:node:old-k8s-version-908284" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-908284' and this object
	W0916 20:09:19.506579  929978 logs.go:138] Found kubelet problem: Sep 16 20:03:48 old-k8s-version-908284 kubelet[661]: E0916 20:03:48.788079     661 reflector.go:138] object-"kube-system"/"metrics-server-token-chtwf": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-chtwf" is forbidden: User "system:node:old-k8s-version-908284" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-908284' and this object
	W0916 20:09:19.506795  929978 logs.go:138] Found kubelet problem: Sep 16 20:03:48 old-k8s-version-908284 kubelet[661]: E0916 20:03:48.788136     661 reflector.go:138] object-"kube-system"/"kindnet-token-bsw29": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-bsw29" is forbidden: User "system:node:old-k8s-version-908284" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-908284' and this object
	W0916 20:09:19.507013  929978 logs.go:138] Found kubelet problem: Sep 16 20:03:48 old-k8s-version-908284 kubelet[661]: E0916 20:03:48.788201     661 reflector.go:138] object-"default"/"default-token-sj5gg": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-sj5gg" is forbidden: User "system:node:old-k8s-version-908284" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-908284' and this object
	W0916 20:09:19.507263  929978 logs.go:138] Found kubelet problem: Sep 16 20:03:48 old-k8s-version-908284 kubelet[661]: E0916 20:03:48.788258     661 reflector.go:138] object-"kube-system"/"storage-provisioner-token-z86wg": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-z86wg" is forbidden: User "system:node:old-k8s-version-908284" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-908284' and this object
	W0916 20:09:19.507491  929978 logs.go:138] Found kubelet problem: Sep 16 20:03:48 old-k8s-version-908284 kubelet[661]: E0916 20:03:48.789351     661 reflector.go:138] object-"kube-system"/"coredns-token-t6kvz": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-t6kvz" is forbidden: User "system:node:old-k8s-version-908284" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-908284' and this object
	W0916 20:09:19.515610  929978 logs.go:138] Found kubelet problem: Sep 16 20:03:50 old-k8s-version-908284 kubelet[661]: E0916 20:03:50.657497     661 pod_workers.go:191] Error syncing pod 33fe9335-f85f-4c1d-be16-ba14e2c4de6b ("metrics-server-9975d5f86-92f4t_kube-system(33fe9335-f85f-4c1d-be16-ba14e2c4de6b)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0916 20:09:19.516594  929978 logs.go:138] Found kubelet problem: Sep 16 20:03:51 old-k8s-version-908284 kubelet[661]: E0916 20:03:51.471074     661 pod_workers.go:191] Error syncing pod 33fe9335-f85f-4c1d-be16-ba14e2c4de6b ("metrics-server-9975d5f86-92f4t_kube-system(33fe9335-f85f-4c1d-be16-ba14e2c4de6b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 20:09:19.519457  929978 logs.go:138] Found kubelet problem: Sep 16 20:04:03 old-k8s-version-908284 kubelet[661]: E0916 20:04:03.357135     661 pod_workers.go:191] Error syncing pod 33fe9335-f85f-4c1d-be16-ba14e2c4de6b ("metrics-server-9975d5f86-92f4t_kube-system(33fe9335-f85f-4c1d-be16-ba14e2c4de6b)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0916 20:09:19.521143  929978 logs.go:138] Found kubelet problem: Sep 16 20:04:15 old-k8s-version-908284 kubelet[661]: E0916 20:04:15.348330     661 pod_workers.go:191] Error syncing pod 33fe9335-f85f-4c1d-be16-ba14e2c4de6b ("metrics-server-9975d5f86-92f4t_kube-system(33fe9335-f85f-4c1d-be16-ba14e2c4de6b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 20:09:19.521739  929978 logs.go:138] Found kubelet problem: Sep 16 20:04:18 old-k8s-version-908284 kubelet[661]: E0916 20:04:18.601266     661 pod_workers.go:191] Error syncing pod 76b4241b-0d25-4778-8c3a-942e26c51c8a ("dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"
	W0916 20:09:19.522069  929978 logs.go:138] Found kubelet problem: Sep 16 20:04:19 old-k8s-version-908284 kubelet[661]: E0916 20:04:19.605264     661 pod_workers.go:191] Error syncing pod 76b4241b-0d25-4778-8c3a-942e26c51c8a ("dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"
	W0916 20:09:19.522396  929978 logs.go:138] Found kubelet problem: Sep 16 20:04:20 old-k8s-version-908284 kubelet[661]: E0916 20:04:20.608597     661 pod_workers.go:191] Error syncing pod 76b4241b-0d25-4778-8c3a-942e26c51c8a ("dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"
	W0916 20:09:19.525190  929978 logs.go:138] Found kubelet problem: Sep 16 20:04:30 old-k8s-version-908284 kubelet[661]: E0916 20:04:30.358010     661 pod_workers.go:191] Error syncing pod 33fe9335-f85f-4c1d-be16-ba14e2c4de6b ("metrics-server-9975d5f86-92f4t_kube-system(33fe9335-f85f-4c1d-be16-ba14e2c4de6b)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0916 20:09:19.526195  929978 logs.go:138] Found kubelet problem: Sep 16 20:04:35 old-k8s-version-908284 kubelet[661]: E0916 20:04:35.645652     661 pod_workers.go:191] Error syncing pod 76b4241b-0d25-4778-8c3a-942e26c51c8a ("dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"
	W0916 20:09:19.526583  929978 logs.go:138] Found kubelet problem: Sep 16 20:04:38 old-k8s-version-908284 kubelet[661]: E0916 20:04:38.822485     661 pod_workers.go:191] Error syncing pod 76b4241b-0d25-4778-8c3a-942e26c51c8a ("dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"
	W0916 20:09:19.526800  929978 logs.go:138] Found kubelet problem: Sep 16 20:04:42 old-k8s-version-908284 kubelet[661]: E0916 20:04:42.348104     661 pod_workers.go:191] Error syncing pod 33fe9335-f85f-4c1d-be16-ba14e2c4de6b ("metrics-server-9975d5f86-92f4t_kube-system(33fe9335-f85f-4c1d-be16-ba14e2c4de6b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 20:09:19.527167  929978 logs.go:138] Found kubelet problem: Sep 16 20:04:54 old-k8s-version-908284 kubelet[661]: E0916 20:04:54.348469     661 pod_workers.go:191] Error syncing pod 76b4241b-0d25-4778-8c3a-942e26c51c8a ("dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"
	W0916 20:09:19.527368  929978 logs.go:138] Found kubelet problem: Sep 16 20:04:57 old-k8s-version-908284 kubelet[661]: E0916 20:04:57.357496     661 pod_workers.go:191] Error syncing pod 33fe9335-f85f-4c1d-be16-ba14e2c4de6b ("metrics-server-9975d5f86-92f4t_kube-system(33fe9335-f85f-4c1d-be16-ba14e2c4de6b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 20:09:19.527956  929978 logs.go:138] Found kubelet problem: Sep 16 20:05:06 old-k8s-version-908284 kubelet[661]: E0916 20:05:06.731209     661 pod_workers.go:191] Error syncing pod 76b4241b-0d25-4778-8c3a-942e26c51c8a ("dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"
	W0916 20:09:19.528144  929978 logs.go:138] Found kubelet problem: Sep 16 20:05:08 old-k8s-version-908284 kubelet[661]: E0916 20:05:08.348529     661 pod_workers.go:191] Error syncing pod 33fe9335-f85f-4c1d-be16-ba14e2c4de6b ("metrics-server-9975d5f86-92f4t_kube-system(33fe9335-f85f-4c1d-be16-ba14e2c4de6b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 20:09:19.528472  929978 logs.go:138] Found kubelet problem: Sep 16 20:05:08 old-k8s-version-908284 kubelet[661]: E0916 20:05:08.823660     661 pod_workers.go:191] Error syncing pod 76b4241b-0d25-4778-8c3a-942e26c51c8a ("dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"
	W0916 20:09:19.528798  929978 logs.go:138] Found kubelet problem: Sep 16 20:05:20 old-k8s-version-908284 kubelet[661]: E0916 20:05:20.347593     661 pod_workers.go:191] Error syncing pod 76b4241b-0d25-4778-8c3a-942e26c51c8a ("dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"
	W0916 20:09:19.531238  929978 logs.go:138] Found kubelet problem: Sep 16 20:05:21 old-k8s-version-908284 kubelet[661]: E0916 20:05:21.355694     661 pod_workers.go:191] Error syncing pod 33fe9335-f85f-4c1d-be16-ba14e2c4de6b ("metrics-server-9975d5f86-92f4t_kube-system(33fe9335-f85f-4c1d-be16-ba14e2c4de6b)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0916 20:09:19.531433  929978 logs.go:138] Found kubelet problem: Sep 16 20:05:32 old-k8s-version-908284 kubelet[661]: E0916 20:05:32.347612     661 pod_workers.go:191] Error syncing pod 33fe9335-f85f-4c1d-be16-ba14e2c4de6b ("metrics-server-9975d5f86-92f4t_kube-system(33fe9335-f85f-4c1d-be16-ba14e2c4de6b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 20:09:19.531843  929978 logs.go:138] Found kubelet problem: Sep 16 20:05:33 old-k8s-version-908284 kubelet[661]: E0916 20:05:33.347213     661 pod_workers.go:191] Error syncing pod 76b4241b-0d25-4778-8c3a-942e26c51c8a ("dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"
	W0916 20:09:19.532040  929978 logs.go:138] Found kubelet problem: Sep 16 20:05:43 old-k8s-version-908284 kubelet[661]: E0916 20:05:43.348442     661 pod_workers.go:191] Error syncing pod 33fe9335-f85f-4c1d-be16-ba14e2c4de6b ("metrics-server-9975d5f86-92f4t_kube-system(33fe9335-f85f-4c1d-be16-ba14e2c4de6b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 20:09:19.532367  929978 logs.go:138] Found kubelet problem: Sep 16 20:05:46 old-k8s-version-908284 kubelet[661]: E0916 20:05:46.347136     661 pod_workers.go:191] Error syncing pod 76b4241b-0d25-4778-8c3a-942e26c51c8a ("dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"
	W0916 20:09:19.532555  929978 logs.go:138] Found kubelet problem: Sep 16 20:05:56 old-k8s-version-908284 kubelet[661]: E0916 20:05:56.351275     661 pod_workers.go:191] Error syncing pod 33fe9335-f85f-4c1d-be16-ba14e2c4de6b ("metrics-server-9975d5f86-92f4t_kube-system(33fe9335-f85f-4c1d-be16-ba14e2c4de6b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 20:09:19.533202  929978 logs.go:138] Found kubelet problem: Sep 16 20:05:58 old-k8s-version-908284 kubelet[661]: E0916 20:05:58.873377     661 pod_workers.go:191] Error syncing pod 76b4241b-0d25-4778-8c3a-942e26c51c8a ("dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"
	W0916 20:09:19.533389  929978 logs.go:138] Found kubelet problem: Sep 16 20:06:08 old-k8s-version-908284 kubelet[661]: E0916 20:06:08.347409     661 pod_workers.go:191] Error syncing pod 33fe9335-f85f-4c1d-be16-ba14e2c4de6b ("metrics-server-9975d5f86-92f4t_kube-system(33fe9335-f85f-4c1d-be16-ba14e2c4de6b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 20:09:19.533723  929978 logs.go:138] Found kubelet problem: Sep 16 20:06:08 old-k8s-version-908284 kubelet[661]: E0916 20:06:08.822882     661 pod_workers.go:191] Error syncing pod 76b4241b-0d25-4778-8c3a-942e26c51c8a ("dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"
	W0916 20:09:19.533910  929978 logs.go:138] Found kubelet problem: Sep 16 20:06:22 old-k8s-version-908284 kubelet[661]: E0916 20:06:22.347687     661 pod_workers.go:191] Error syncing pod 33fe9335-f85f-4c1d-be16-ba14e2c4de6b ("metrics-server-9975d5f86-92f4t_kube-system(33fe9335-f85f-4c1d-be16-ba14e2c4de6b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 20:09:19.534360  929978 logs.go:138] Found kubelet problem: Sep 16 20:06:22 old-k8s-version-908284 kubelet[661]: E0916 20:06:22.348479     661 pod_workers.go:191] Error syncing pod 76b4241b-0d25-4778-8c3a-942e26c51c8a ("dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"
	W0916 20:09:19.534779  929978 logs.go:138] Found kubelet problem: Sep 16 20:06:35 old-k8s-version-908284 kubelet[661]: E0916 20:06:35.347744     661 pod_workers.go:191] Error syncing pod 33fe9335-f85f-4c1d-be16-ba14e2c4de6b ("metrics-server-9975d5f86-92f4t_kube-system(33fe9335-f85f-4c1d-be16-ba14e2c4de6b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 20:09:19.534980  929978 logs.go:138] Found kubelet problem: Sep 16 20:06:35 old-k8s-version-908284 kubelet[661]: E0916 20:06:35.348011     661 pod_workers.go:191] Error syncing pod 76b4241b-0d25-4778-8c3a-942e26c51c8a ("dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"
	W0916 20:09:19.535308  929978 logs.go:138] Found kubelet problem: Sep 16 20:06:50 old-k8s-version-908284 kubelet[661]: E0916 20:06:50.352305     661 pod_workers.go:191] Error syncing pod 76b4241b-0d25-4778-8c3a-942e26c51c8a ("dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"
	W0916 20:09:19.537849  929978 logs.go:138] Found kubelet problem: Sep 16 20:06:50 old-k8s-version-908284 kubelet[661]: E0916 20:06:50.358463     661 pod_workers.go:191] Error syncing pod 33fe9335-f85f-4c1d-be16-ba14e2c4de6b ("metrics-server-9975d5f86-92f4t_kube-system(33fe9335-f85f-4c1d-be16-ba14e2c4de6b)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0916 20:09:19.538124  929978 logs.go:138] Found kubelet problem: Sep 16 20:07:01 old-k8s-version-908284 kubelet[661]: E0916 20:07:01.347599     661 pod_workers.go:191] Error syncing pod 33fe9335-f85f-4c1d-be16-ba14e2c4de6b ("metrics-server-9975d5f86-92f4t_kube-system(33fe9335-f85f-4c1d-be16-ba14e2c4de6b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 20:09:19.538531  929978 logs.go:138] Found kubelet problem: Sep 16 20:07:04 old-k8s-version-908284 kubelet[661]: E0916 20:07:04.348052     661 pod_workers.go:191] Error syncing pod 76b4241b-0d25-4778-8c3a-942e26c51c8a ("dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"
	W0916 20:09:19.538723  929978 logs.go:138] Found kubelet problem: Sep 16 20:07:16 old-k8s-version-908284 kubelet[661]: E0916 20:07:16.347843     661 pod_workers.go:191] Error syncing pod 33fe9335-f85f-4c1d-be16-ba14e2c4de6b ("metrics-server-9975d5f86-92f4t_kube-system(33fe9335-f85f-4c1d-be16-ba14e2c4de6b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 20:09:19.539324  929978 logs.go:138] Found kubelet problem: Sep 16 20:07:20 old-k8s-version-908284 kubelet[661]: E0916 20:07:20.103183     661 pod_workers.go:191] Error syncing pod 76b4241b-0d25-4778-8c3a-942e26c51c8a ("dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"
	W0916 20:09:19.539651  929978 logs.go:138] Found kubelet problem: Sep 16 20:07:28 old-k8s-version-908284 kubelet[661]: E0916 20:07:28.823105     661 pod_workers.go:191] Error syncing pod 76b4241b-0d25-4778-8c3a-942e26c51c8a ("dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"
	W0916 20:09:19.539837  929978 logs.go:138] Found kubelet problem: Sep 16 20:07:29 old-k8s-version-908284 kubelet[661]: E0916 20:07:29.348615     661 pod_workers.go:191] Error syncing pod 33fe9335-f85f-4c1d-be16-ba14e2c4de6b ("metrics-server-9975d5f86-92f4t_kube-system(33fe9335-f85f-4c1d-be16-ba14e2c4de6b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 20:09:19.540022  929978 logs.go:138] Found kubelet problem: Sep 16 20:07:40 old-k8s-version-908284 kubelet[661]: E0916 20:07:40.348522     661 pod_workers.go:191] Error syncing pod 33fe9335-f85f-4c1d-be16-ba14e2c4de6b ("metrics-server-9975d5f86-92f4t_kube-system(33fe9335-f85f-4c1d-be16-ba14e2c4de6b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 20:09:19.540349  929978 logs.go:138] Found kubelet problem: Sep 16 20:07:43 old-k8s-version-908284 kubelet[661]: E0916 20:07:43.347650     661 pod_workers.go:191] Error syncing pod 76b4241b-0d25-4778-8c3a-942e26c51c8a ("dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"
	W0916 20:09:19.540535  929978 logs.go:138] Found kubelet problem: Sep 16 20:07:53 old-k8s-version-908284 kubelet[661]: E0916 20:07:53.347525     661 pod_workers.go:191] Error syncing pod 33fe9335-f85f-4c1d-be16-ba14e2c4de6b ("metrics-server-9975d5f86-92f4t_kube-system(33fe9335-f85f-4c1d-be16-ba14e2c4de6b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 20:09:19.540860  929978 logs.go:138] Found kubelet problem: Sep 16 20:07:58 old-k8s-version-908284 kubelet[661]: E0916 20:07:58.348899     661 pod_workers.go:191] Error syncing pod 76b4241b-0d25-4778-8c3a-942e26c51c8a ("dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"
	W0916 20:09:19.541044  929978 logs.go:138] Found kubelet problem: Sep 16 20:08:08 old-k8s-version-908284 kubelet[661]: E0916 20:08:08.350720     661 pod_workers.go:191] Error syncing pod 33fe9335-f85f-4c1d-be16-ba14e2c4de6b ("metrics-server-9975d5f86-92f4t_kube-system(33fe9335-f85f-4c1d-be16-ba14e2c4de6b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 20:09:19.541429  929978 logs.go:138] Found kubelet problem: Sep 16 20:08:11 old-k8s-version-908284 kubelet[661]: E0916 20:08:11.347616     661 pod_workers.go:191] Error syncing pod 76b4241b-0d25-4778-8c3a-942e26c51c8a ("dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"
	W0916 20:09:19.541630  929978 logs.go:138] Found kubelet problem: Sep 16 20:08:21 old-k8s-version-908284 kubelet[661]: E0916 20:08:21.347550     661 pod_workers.go:191] Error syncing pod 33fe9335-f85f-4c1d-be16-ba14e2c4de6b ("metrics-server-9975d5f86-92f4t_kube-system(33fe9335-f85f-4c1d-be16-ba14e2c4de6b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 20:09:19.541965  929978 logs.go:138] Found kubelet problem: Sep 16 20:08:24 old-k8s-version-908284 kubelet[661]: E0916 20:08:24.347935     661 pod_workers.go:191] Error syncing pod 76b4241b-0d25-4778-8c3a-942e26c51c8a ("dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"
	W0916 20:09:19.542152  929978 logs.go:138] Found kubelet problem: Sep 16 20:08:32 old-k8s-version-908284 kubelet[661]: E0916 20:08:32.348528     661 pod_workers.go:191] Error syncing pod 33fe9335-f85f-4c1d-be16-ba14e2c4de6b ("metrics-server-9975d5f86-92f4t_kube-system(33fe9335-f85f-4c1d-be16-ba14e2c4de6b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 20:09:19.542480  929978 logs.go:138] Found kubelet problem: Sep 16 20:08:37 old-k8s-version-908284 kubelet[661]: E0916 20:08:37.347607     661 pod_workers.go:191] Error syncing pod 76b4241b-0d25-4778-8c3a-942e26c51c8a ("dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"
	W0916 20:09:19.542665  929978 logs.go:138] Found kubelet problem: Sep 16 20:08:47 old-k8s-version-908284 kubelet[661]: E0916 20:08:47.347660     661 pod_workers.go:191] Error syncing pod 33fe9335-f85f-4c1d-be16-ba14e2c4de6b ("metrics-server-9975d5f86-92f4t_kube-system(33fe9335-f85f-4c1d-be16-ba14e2c4de6b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 20:09:19.542991  929978 logs.go:138] Found kubelet problem: Sep 16 20:08:52 old-k8s-version-908284 kubelet[661]: E0916 20:08:52.352536     661 pod_workers.go:191] Error syncing pod 76b4241b-0d25-4778-8c3a-942e26c51c8a ("dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"
	W0916 20:09:19.543178  929978 logs.go:138] Found kubelet problem: Sep 16 20:08:58 old-k8s-version-908284 kubelet[661]: E0916 20:08:58.348436     661 pod_workers.go:191] Error syncing pod 33fe9335-f85f-4c1d-be16-ba14e2c4de6b ("metrics-server-9975d5f86-92f4t_kube-system(33fe9335-f85f-4c1d-be16-ba14e2c4de6b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 20:09:19.543534  929978 logs.go:138] Found kubelet problem: Sep 16 20:09:04 old-k8s-version-908284 kubelet[661]: E0916 20:09:04.348257     661 pod_workers.go:191] Error syncing pod 76b4241b-0d25-4778-8c3a-942e26c51c8a ("dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"
	W0916 20:09:19.543720  929978 logs.go:138] Found kubelet problem: Sep 16 20:09:12 old-k8s-version-908284 kubelet[661]: E0916 20:09:12.351486     661 pod_workers.go:191] Error syncing pod 33fe9335-f85f-4c1d-be16-ba14e2c4de6b ("metrics-server-9975d5f86-92f4t_kube-system(33fe9335-f85f-4c1d-be16-ba14e2c4de6b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 20:09:19.544081  929978 logs.go:138] Found kubelet problem: Sep 16 20:09:15 old-k8s-version-908284 kubelet[661]: E0916 20:09:15.347023     661 pod_workers.go:191] Error syncing pod 76b4241b-0d25-4778-8c3a-942e26c51c8a ("dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"
	I0916 20:09:19.544095  929978 logs.go:123] Gathering logs for describe nodes ...
	I0916 20:09:19.544109  929978 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 20:09:19.747506  929978 logs.go:123] Gathering logs for kube-apiserver [29503eaa5c2ae2b3f8ad37d2fa456369ed87669ef1293a388883a927f2d6f5bd] ...
	I0916 20:09:19.747536  929978 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 29503eaa5c2ae2b3f8ad37d2fa456369ed87669ef1293a388883a927f2d6f5bd"
	I0916 20:09:19.834551  929978 logs.go:123] Gathering logs for etcd [445fc47c234683c922f709203c6b1528636824cf9904de9437bfad48f5bb40bb] ...
	I0916 20:09:19.834586  929978 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 445fc47c234683c922f709203c6b1528636824cf9904de9437bfad48f5bb40bb"
	I0916 20:09:19.890990  929978 logs.go:123] Gathering logs for etcd [b1144ab00f4c3249a9dbe4fed6b1368a14fe5aba7783b45f4fa53cc5e203ce97] ...
	I0916 20:09:19.891022  929978 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b1144ab00f4c3249a9dbe4fed6b1368a14fe5aba7783b45f4fa53cc5e203ce97"
	I0916 20:09:19.953195  929978 logs.go:123] Gathering logs for kube-proxy [f1a07ea1e6c1902d18aad82fc148e1704278cff58e2f38adcb46954946abe5af] ...
	I0916 20:09:19.953231  929978 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f1a07ea1e6c1902d18aad82fc148e1704278cff58e2f38adcb46954946abe5af"
	I0916 20:09:20.004418  929978 out.go:358] Setting ErrFile to fd 2...
	I0916 20:09:20.004449  929978 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0916 20:09:20.004500  929978 out.go:270] X Problems detected in kubelet:
	W0916 20:09:20.004508  929978 out.go:270]   Sep 16 20:08:52 old-k8s-version-908284 kubelet[661]: E0916 20:08:52.352536     661 pod_workers.go:191] Error syncing pod 76b4241b-0d25-4778-8c3a-942e26c51c8a ("dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"
	W0916 20:09:20.004514  929978 out.go:270]   Sep 16 20:08:58 old-k8s-version-908284 kubelet[661]: E0916 20:08:58.348436     661 pod_workers.go:191] Error syncing pod 33fe9335-f85f-4c1d-be16-ba14e2c4de6b ("metrics-server-9975d5f86-92f4t_kube-system(33fe9335-f85f-4c1d-be16-ba14e2c4de6b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 20:09:20.004525  929978 out.go:270]   Sep 16 20:09:04 old-k8s-version-908284 kubelet[661]: E0916 20:09:04.348257     661 pod_workers.go:191] Error syncing pod 76b4241b-0d25-4778-8c3a-942e26c51c8a ("dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"
	W0916 20:09:20.004530  929978 out.go:270]   Sep 16 20:09:12 old-k8s-version-908284 kubelet[661]: E0916 20:09:12.351486     661 pod_workers.go:191] Error syncing pod 33fe9335-f85f-4c1d-be16-ba14e2c4de6b ("metrics-server-9975d5f86-92f4t_kube-system(33fe9335-f85f-4c1d-be16-ba14e2c4de6b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 20:09:20.004543  929978 out.go:270]   Sep 16 20:09:15 old-k8s-version-908284 kubelet[661]: E0916 20:09:15.347023     661 pod_workers.go:191] Error syncing pod 76b4241b-0d25-4778-8c3a-942e26c51c8a ("dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"
	I0916 20:09:20.004548  929978 out.go:358] Setting ErrFile to fd 2...
	I0916 20:09:20.004555  929978 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 20:09:30.008755  929978 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 20:09:30.033592  929978 api_server.go:72] duration metric: took 5m57.634635944s to wait for apiserver process to appear ...
	I0916 20:09:30.033618  929978 api_server.go:88] waiting for apiserver healthz status ...
	I0916 20:09:30.033657  929978 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0916 20:09:30.033745  929978 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 20:09:30.169231  929978 cri.go:89] found id: "5125f7e68621ccafdca0574900a08627704d25dad8c6cb286177bcceafb722f3"
	I0916 20:09:30.169257  929978 cri.go:89] found id: "29503eaa5c2ae2b3f8ad37d2fa456369ed87669ef1293a388883a927f2d6f5bd"
	I0916 20:09:30.169262  929978 cri.go:89] found id: ""
	I0916 20:09:30.169269  929978 logs.go:276] 2 containers: [5125f7e68621ccafdca0574900a08627704d25dad8c6cb286177bcceafb722f3 29503eaa5c2ae2b3f8ad37d2fa456369ed87669ef1293a388883a927f2d6f5bd]
	I0916 20:09:30.169329  929978 ssh_runner.go:195] Run: which crictl
	I0916 20:09:30.186647  929978 ssh_runner.go:195] Run: which crictl
	I0916 20:09:30.193396  929978 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0916 20:09:30.193474  929978 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 20:09:30.298899  929978 cri.go:89] found id: "445fc47c234683c922f709203c6b1528636824cf9904de9437bfad48f5bb40bb"
	I0916 20:09:30.298922  929978 cri.go:89] found id: "b1144ab00f4c3249a9dbe4fed6b1368a14fe5aba7783b45f4fa53cc5e203ce97"
	I0916 20:09:30.298926  929978 cri.go:89] found id: ""
	I0916 20:09:30.298934  929978 logs.go:276] 2 containers: [445fc47c234683c922f709203c6b1528636824cf9904de9437bfad48f5bb40bb b1144ab00f4c3249a9dbe4fed6b1368a14fe5aba7783b45f4fa53cc5e203ce97]
	I0916 20:09:30.298992  929978 ssh_runner.go:195] Run: which crictl
	I0916 20:09:30.303813  929978 ssh_runner.go:195] Run: which crictl
	I0916 20:09:30.307717  929978 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0916 20:09:30.307788  929978 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 20:09:30.388113  929978 cri.go:89] found id: "7d87e62b9d7540a8d0fcd059083894029feeab7ea8f0a8cacd8811b01eee9456"
	I0916 20:09:30.388133  929978 cri.go:89] found id: "29165aa257b4ed82c3bed159074da5f7d4b4358a908e5b0b2f105c538186699c"
	I0916 20:09:30.388139  929978 cri.go:89] found id: ""
	I0916 20:09:30.388146  929978 logs.go:276] 2 containers: [7d87e62b9d7540a8d0fcd059083894029feeab7ea8f0a8cacd8811b01eee9456 29165aa257b4ed82c3bed159074da5f7d4b4358a908e5b0b2f105c538186699c]
	I0916 20:09:30.388206  929978 ssh_runner.go:195] Run: which crictl
	I0916 20:09:30.394236  929978 ssh_runner.go:195] Run: which crictl
	I0916 20:09:30.398922  929978 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0916 20:09:30.399042  929978 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 20:09:30.457493  929978 cri.go:89] found id: "7cdfa563a6ecfd1d7a2de5d367a101d25b1393c9cd20db9b8cc1ac35ca3d5911"
	I0916 20:09:30.457581  929978 cri.go:89] found id: "e72d9de27e5e47273b9413248459297d25f9204a3d3bdbe871f68a09eed8cc31"
	I0916 20:09:30.457603  929978 cri.go:89] found id: ""
	I0916 20:09:30.457629  929978 logs.go:276] 2 containers: [7cdfa563a6ecfd1d7a2de5d367a101d25b1393c9cd20db9b8cc1ac35ca3d5911 e72d9de27e5e47273b9413248459297d25f9204a3d3bdbe871f68a09eed8cc31]
	I0916 20:09:30.457746  929978 ssh_runner.go:195] Run: which crictl
	I0916 20:09:30.464870  929978 ssh_runner.go:195] Run: which crictl
	I0916 20:09:30.470379  929978 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0916 20:09:30.470499  929978 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 20:09:30.546548  929978 cri.go:89] found id: "d5453c9c01ecd9cc0d09b6bf868960b6312637344560d06faeca7704cd561607"
	I0916 20:09:30.546615  929978 cri.go:89] found id: "f1a07ea1e6c1902d18aad82fc148e1704278cff58e2f38adcb46954946abe5af"
	I0916 20:09:30.546637  929978 cri.go:89] found id: ""
	I0916 20:09:30.546661  929978 logs.go:276] 2 containers: [d5453c9c01ecd9cc0d09b6bf868960b6312637344560d06faeca7704cd561607 f1a07ea1e6c1902d18aad82fc148e1704278cff58e2f38adcb46954946abe5af]
	I0916 20:09:30.546746  929978 ssh_runner.go:195] Run: which crictl
	I0916 20:09:30.557684  929978 ssh_runner.go:195] Run: which crictl
	I0916 20:09:30.568074  929978 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 20:09:30.568207  929978 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 20:09:30.643400  929978 cri.go:89] found id: "e72de83d21dbee5bba189dbb08793f2149fe20dd123b4ab09058a74285c67018"
	I0916 20:09:30.643474  929978 cri.go:89] found id: "34ed62a120d6504514cf022878498c06dd4558aff9f75e87eff0e60b822c82b0"
	I0916 20:09:30.643502  929978 cri.go:89] found id: ""
	I0916 20:09:30.643524  929978 logs.go:276] 2 containers: [e72de83d21dbee5bba189dbb08793f2149fe20dd123b4ab09058a74285c67018 34ed62a120d6504514cf022878498c06dd4558aff9f75e87eff0e60b822c82b0]
	I0916 20:09:30.643609  929978 ssh_runner.go:195] Run: which crictl
	I0916 20:09:30.650571  929978 ssh_runner.go:195] Run: which crictl
	I0916 20:09:30.657181  929978 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0916 20:09:30.657326  929978 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 20:09:30.724423  929978 cri.go:89] found id: "eb8d587a9ae36a864b49240c05c5eb3eef4bbd1461b753f08ec024058d7f6b87"
	I0916 20:09:30.724443  929978 cri.go:89] found id: "73b796fc3c23c7bdf0b3e94af03866db4c7513e7f70c3a040a2734d0323c37be"
	I0916 20:09:30.724448  929978 cri.go:89] found id: ""
	I0916 20:09:30.724455  929978 logs.go:276] 2 containers: [eb8d587a9ae36a864b49240c05c5eb3eef4bbd1461b753f08ec024058d7f6b87 73b796fc3c23c7bdf0b3e94af03866db4c7513e7f70c3a040a2734d0323c37be]
	I0916 20:09:30.724510  929978 ssh_runner.go:195] Run: which crictl
	I0916 20:09:30.742143  929978 ssh_runner.go:195] Run: which crictl
	I0916 20:09:30.752467  929978 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0916 20:09:30.752527  929978 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0916 20:09:30.858864  929978 cri.go:89] found id: "0f208bbb678a1a4ad386a9e37deb1b867d200257d179670c26adcc562f0e4cf2"
	I0916 20:09:30.858883  929978 cri.go:89] found id: ""
	I0916 20:09:30.858891  929978 logs.go:276] 1 containers: [0f208bbb678a1a4ad386a9e37deb1b867d200257d179670c26adcc562f0e4cf2]
	I0916 20:09:30.858943  929978 ssh_runner.go:195] Run: which crictl
	I0916 20:09:30.862910  929978 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0916 20:09:30.862979  929978 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0916 20:09:30.989879  929978 cri.go:89] found id: "acc8d7546233616133689ca5f3763e68bd10522884488e5cf9ab9bf09947cb67"
	I0916 20:09:30.989905  929978 cri.go:89] found id: "05471d3e1c31ae8e82b32033906c8b2d9d329a3ea9850acf38ac17d8175331ed"
	I0916 20:09:30.989914  929978 cri.go:89] found id: ""
	I0916 20:09:30.989922  929978 logs.go:276] 2 containers: [acc8d7546233616133689ca5f3763e68bd10522884488e5cf9ab9bf09947cb67 05471d3e1c31ae8e82b32033906c8b2d9d329a3ea9850acf38ac17d8175331ed]
	I0916 20:09:30.990001  929978 ssh_runner.go:195] Run: which crictl
	I0916 20:09:30.995821  929978 ssh_runner.go:195] Run: which crictl
	I0916 20:09:31.000489  929978 logs.go:123] Gathering logs for kube-apiserver [29503eaa5c2ae2b3f8ad37d2fa456369ed87669ef1293a388883a927f2d6f5bd] ...
	I0916 20:09:31.000522  929978 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 29503eaa5c2ae2b3f8ad37d2fa456369ed87669ef1293a388883a927f2d6f5bd"
	I0916 20:09:31.110698  929978 logs.go:123] Gathering logs for coredns [29165aa257b4ed82c3bed159074da5f7d4b4358a908e5b0b2f105c538186699c] ...
	I0916 20:09:31.114068  929978 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 29165aa257b4ed82c3bed159074da5f7d4b4358a908e5b0b2f105c538186699c"
	I0916 20:09:31.184128  929978 logs.go:123] Gathering logs for kube-scheduler [7cdfa563a6ecfd1d7a2de5d367a101d25b1393c9cd20db9b8cc1ac35ca3d5911] ...
	I0916 20:09:31.184159  929978 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7cdfa563a6ecfd1d7a2de5d367a101d25b1393c9cd20db9b8cc1ac35ca3d5911"
	I0916 20:09:31.250109  929978 logs.go:123] Gathering logs for kube-controller-manager [e72de83d21dbee5bba189dbb08793f2149fe20dd123b4ab09058a74285c67018] ...
	I0916 20:09:31.250138  929978 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e72de83d21dbee5bba189dbb08793f2149fe20dd123b4ab09058a74285c67018"
	I0916 20:09:31.330890  929978 logs.go:123] Gathering logs for kindnet [eb8d587a9ae36a864b49240c05c5eb3eef4bbd1461b753f08ec024058d7f6b87] ...
	I0916 20:09:31.330929  929978 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb8d587a9ae36a864b49240c05c5eb3eef4bbd1461b753f08ec024058d7f6b87"
	I0916 20:09:31.417133  929978 logs.go:123] Gathering logs for kindnet [73b796fc3c23c7bdf0b3e94af03866db4c7513e7f70c3a040a2734d0323c37be] ...
	I0916 20:09:31.417165  929978 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 73b796fc3c23c7bdf0b3e94af03866db4c7513e7f70c3a040a2734d0323c37be"
	I0916 20:09:31.476759  929978 logs.go:123] Gathering logs for kubelet ...
	I0916 20:09:31.476791  929978 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0916 20:09:31.539110  929978 logs.go:138] Found kubelet problem: Sep 16 20:03:48 old-k8s-version-908284 kubelet[661]: E0916 20:03:48.787093     661 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-908284" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-908284' and this object
	W0916 20:09:31.539554  929978 logs.go:138] Found kubelet problem: Sep 16 20:03:48 old-k8s-version-908284 kubelet[661]: E0916 20:03:48.787518     661 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-908284" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-908284' and this object
	W0916 20:09:31.539819  929978 logs.go:138] Found kubelet problem: Sep 16 20:03:48 old-k8s-version-908284 kubelet[661]: E0916 20:03:48.787702     661 reflector.go:138] object-"kube-system"/"kube-proxy-token-87lls": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-87lls" is forbidden: User "system:node:old-k8s-version-908284" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-908284' and this object
	W0916 20:09:31.540074  929978 logs.go:138] Found kubelet problem: Sep 16 20:03:48 old-k8s-version-908284 kubelet[661]: E0916 20:03:48.788079     661 reflector.go:138] object-"kube-system"/"metrics-server-token-chtwf": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-chtwf" is forbidden: User "system:node:old-k8s-version-908284" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-908284' and this object
	W0916 20:09:31.540322  929978 logs.go:138] Found kubelet problem: Sep 16 20:03:48 old-k8s-version-908284 kubelet[661]: E0916 20:03:48.788136     661 reflector.go:138] object-"kube-system"/"kindnet-token-bsw29": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-bsw29" is forbidden: User "system:node:old-k8s-version-908284" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-908284' and this object
	W0916 20:09:31.540558  929978 logs.go:138] Found kubelet problem: Sep 16 20:03:48 old-k8s-version-908284 kubelet[661]: E0916 20:03:48.788201     661 reflector.go:138] object-"default"/"default-token-sj5gg": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-sj5gg" is forbidden: User "system:node:old-k8s-version-908284" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-908284' and this object
	W0916 20:09:31.540810  929978 logs.go:138] Found kubelet problem: Sep 16 20:03:48 old-k8s-version-908284 kubelet[661]: E0916 20:03:48.788258     661 reflector.go:138] object-"kube-system"/"storage-provisioner-token-z86wg": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-z86wg" is forbidden: User "system:node:old-k8s-version-908284" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-908284' and this object
	W0916 20:09:31.541051  929978 logs.go:138] Found kubelet problem: Sep 16 20:03:48 old-k8s-version-908284 kubelet[661]: E0916 20:03:48.789351     661 reflector.go:138] object-"kube-system"/"coredns-token-t6kvz": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-t6kvz" is forbidden: User "system:node:old-k8s-version-908284" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-908284' and this object
	W0916 20:09:31.549884  929978 logs.go:138] Found kubelet problem: Sep 16 20:03:50 old-k8s-version-908284 kubelet[661]: E0916 20:03:50.657497     661 pod_workers.go:191] Error syncing pod 33fe9335-f85f-4c1d-be16-ba14e2c4de6b ("metrics-server-9975d5f86-92f4t_kube-system(33fe9335-f85f-4c1d-be16-ba14e2c4de6b)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0916 20:09:31.550956  929978 logs.go:138] Found kubelet problem: Sep 16 20:03:51 old-k8s-version-908284 kubelet[661]: E0916 20:03:51.471074     661 pod_workers.go:191] Error syncing pod 33fe9335-f85f-4c1d-be16-ba14e2c4de6b ("metrics-server-9975d5f86-92f4t_kube-system(33fe9335-f85f-4c1d-be16-ba14e2c4de6b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 20:09:31.555254  929978 logs.go:138] Found kubelet problem: Sep 16 20:04:03 old-k8s-version-908284 kubelet[661]: E0916 20:04:03.357135     661 pod_workers.go:191] Error syncing pod 33fe9335-f85f-4c1d-be16-ba14e2c4de6b ("metrics-server-9975d5f86-92f4t_kube-system(33fe9335-f85f-4c1d-be16-ba14e2c4de6b)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0916 20:09:31.557173  929978 logs.go:138] Found kubelet problem: Sep 16 20:04:15 old-k8s-version-908284 kubelet[661]: E0916 20:04:15.348330     661 pod_workers.go:191] Error syncing pod 33fe9335-f85f-4c1d-be16-ba14e2c4de6b ("metrics-server-9975d5f86-92f4t_kube-system(33fe9335-f85f-4c1d-be16-ba14e2c4de6b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 20:09:31.557835  929978 logs.go:138] Found kubelet problem: Sep 16 20:04:18 old-k8s-version-908284 kubelet[661]: E0916 20:04:18.601266     661 pod_workers.go:191] Error syncing pod 76b4241b-0d25-4778-8c3a-942e26c51c8a ("dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"
	W0916 20:09:31.558199  929978 logs.go:138] Found kubelet problem: Sep 16 20:04:19 old-k8s-version-908284 kubelet[661]: E0916 20:04:19.605264     661 pod_workers.go:191] Error syncing pod 76b4241b-0d25-4778-8c3a-942e26c51c8a ("dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"
	W0916 20:09:31.558602  929978 logs.go:138] Found kubelet problem: Sep 16 20:04:20 old-k8s-version-908284 kubelet[661]: E0916 20:04:20.608597     661 pod_workers.go:191] Error syncing pod 76b4241b-0d25-4778-8c3a-942e26c51c8a ("dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"
	W0916 20:09:31.561566  929978 logs.go:138] Found kubelet problem: Sep 16 20:04:30 old-k8s-version-908284 kubelet[661]: E0916 20:04:30.358010     661 pod_workers.go:191] Error syncing pod 33fe9335-f85f-4c1d-be16-ba14e2c4de6b ("metrics-server-9975d5f86-92f4t_kube-system(33fe9335-f85f-4c1d-be16-ba14e2c4de6b)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0916 20:09:31.562560  929978 logs.go:138] Found kubelet problem: Sep 16 20:04:35 old-k8s-version-908284 kubelet[661]: E0916 20:04:35.645652     661 pod_workers.go:191] Error syncing pod 76b4241b-0d25-4778-8c3a-942e26c51c8a ("dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"
	W0916 20:09:31.562921  929978 logs.go:138] Found kubelet problem: Sep 16 20:04:38 old-k8s-version-908284 kubelet[661]: E0916 20:04:38.822485     661 pod_workers.go:191] Error syncing pod 76b4241b-0d25-4778-8c3a-942e26c51c8a ("dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"
	W0916 20:09:31.563132  929978 logs.go:138] Found kubelet problem: Sep 16 20:04:42 old-k8s-version-908284 kubelet[661]: E0916 20:04:42.348104     661 pod_workers.go:191] Error syncing pod 33fe9335-f85f-4c1d-be16-ba14e2c4de6b ("metrics-server-9975d5f86-92f4t_kube-system(33fe9335-f85f-4c1d-be16-ba14e2c4de6b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 20:09:31.563530  929978 logs.go:138] Found kubelet problem: Sep 16 20:04:54 old-k8s-version-908284 kubelet[661]: E0916 20:04:54.348469     661 pod_workers.go:191] Error syncing pod 76b4241b-0d25-4778-8c3a-942e26c51c8a ("dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"
	W0916 20:09:31.563754  929978 logs.go:138] Found kubelet problem: Sep 16 20:04:57 old-k8s-version-908284 kubelet[661]: E0916 20:04:57.357496     661 pod_workers.go:191] Error syncing pod 33fe9335-f85f-4c1d-be16-ba14e2c4de6b ("metrics-server-9975d5f86-92f4t_kube-system(33fe9335-f85f-4c1d-be16-ba14e2c4de6b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 20:09:31.564427  929978 logs.go:138] Found kubelet problem: Sep 16 20:05:06 old-k8s-version-908284 kubelet[661]: E0916 20:05:06.731209     661 pod_workers.go:191] Error syncing pod 76b4241b-0d25-4778-8c3a-942e26c51c8a ("dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"
	W0916 20:09:31.564644  929978 logs.go:138] Found kubelet problem: Sep 16 20:05:08 old-k8s-version-908284 kubelet[661]: E0916 20:05:08.348529     661 pod_workers.go:191] Error syncing pod 33fe9335-f85f-4c1d-be16-ba14e2c4de6b ("metrics-server-9975d5f86-92f4t_kube-system(33fe9335-f85f-4c1d-be16-ba14e2c4de6b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 20:09:31.565001  929978 logs.go:138] Found kubelet problem: Sep 16 20:05:08 old-k8s-version-908284 kubelet[661]: E0916 20:05:08.823660     661 pod_workers.go:191] Error syncing pod 76b4241b-0d25-4778-8c3a-942e26c51c8a ("dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"
	W0916 20:09:31.565355  929978 logs.go:138] Found kubelet problem: Sep 16 20:05:20 old-k8s-version-908284 kubelet[661]: E0916 20:05:20.347593     661 pod_workers.go:191] Error syncing pod 76b4241b-0d25-4778-8c3a-942e26c51c8a ("dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"
	W0916 20:09:31.567995  929978 logs.go:138] Found kubelet problem: Sep 16 20:05:21 old-k8s-version-908284 kubelet[661]: E0916 20:05:21.355694     661 pod_workers.go:191] Error syncing pod 33fe9335-f85f-4c1d-be16-ba14e2c4de6b ("metrics-server-9975d5f86-92f4t_kube-system(33fe9335-f85f-4c1d-be16-ba14e2c4de6b)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0916 20:09:31.568223  929978 logs.go:138] Found kubelet problem: Sep 16 20:05:32 old-k8s-version-908284 kubelet[661]: E0916 20:05:32.347612     661 pod_workers.go:191] Error syncing pod 33fe9335-f85f-4c1d-be16-ba14e2c4de6b ("metrics-server-9975d5f86-92f4t_kube-system(33fe9335-f85f-4c1d-be16-ba14e2c4de6b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 20:09:31.568579  929978 logs.go:138] Found kubelet problem: Sep 16 20:05:33 old-k8s-version-908284 kubelet[661]: E0916 20:05:33.347213     661 pod_workers.go:191] Error syncing pod 76b4241b-0d25-4778-8c3a-942e26c51c8a ("dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"
	W0916 20:09:31.568792  929978 logs.go:138] Found kubelet problem: Sep 16 20:05:43 old-k8s-version-908284 kubelet[661]: E0916 20:05:43.348442     661 pod_workers.go:191] Error syncing pod 33fe9335-f85f-4c1d-be16-ba14e2c4de6b ("metrics-server-9975d5f86-92f4t_kube-system(33fe9335-f85f-4c1d-be16-ba14e2c4de6b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 20:09:31.569143  929978 logs.go:138] Found kubelet problem: Sep 16 20:05:46 old-k8s-version-908284 kubelet[661]: E0916 20:05:46.347136     661 pod_workers.go:191] Error syncing pod 76b4241b-0d25-4778-8c3a-942e26c51c8a ("dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"
	W0916 20:09:31.569362  929978 logs.go:138] Found kubelet problem: Sep 16 20:05:56 old-k8s-version-908284 kubelet[661]: E0916 20:05:56.351275     661 pod_workers.go:191] Error syncing pod 33fe9335-f85f-4c1d-be16-ba14e2c4de6b ("metrics-server-9975d5f86-92f4t_kube-system(33fe9335-f85f-4c1d-be16-ba14e2c4de6b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 20:09:31.570009  929978 logs.go:138] Found kubelet problem: Sep 16 20:05:58 old-k8s-version-908284 kubelet[661]: E0916 20:05:58.873377     661 pod_workers.go:191] Error syncing pod 76b4241b-0d25-4778-8c3a-942e26c51c8a ("dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"
	W0916 20:09:31.570272  929978 logs.go:138] Found kubelet problem: Sep 16 20:06:08 old-k8s-version-908284 kubelet[661]: E0916 20:06:08.347409     661 pod_workers.go:191] Error syncing pod 33fe9335-f85f-4c1d-be16-ba14e2c4de6b ("metrics-server-9975d5f86-92f4t_kube-system(33fe9335-f85f-4c1d-be16-ba14e2c4de6b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 20:09:31.570642  929978 logs.go:138] Found kubelet problem: Sep 16 20:06:08 old-k8s-version-908284 kubelet[661]: E0916 20:06:08.822882     661 pod_workers.go:191] Error syncing pod 76b4241b-0d25-4778-8c3a-942e26c51c8a ("dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"
	W0916 20:09:31.570858  929978 logs.go:138] Found kubelet problem: Sep 16 20:06:22 old-k8s-version-908284 kubelet[661]: E0916 20:06:22.347687     661 pod_workers.go:191] Error syncing pod 33fe9335-f85f-4c1d-be16-ba14e2c4de6b ("metrics-server-9975d5f86-92f4t_kube-system(33fe9335-f85f-4c1d-be16-ba14e2c4de6b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 20:09:31.571215  929978 logs.go:138] Found kubelet problem: Sep 16 20:06:22 old-k8s-version-908284 kubelet[661]: E0916 20:06:22.348479     661 pod_workers.go:191] Error syncing pod 76b4241b-0d25-4778-8c3a-942e26c51c8a ("dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"
	W0916 20:09:31.571595  929978 logs.go:138] Found kubelet problem: Sep 16 20:06:35 old-k8s-version-908284 kubelet[661]: E0916 20:06:35.347744     661 pod_workers.go:191] Error syncing pod 33fe9335-f85f-4c1d-be16-ba14e2c4de6b ("metrics-server-9975d5f86-92f4t_kube-system(33fe9335-f85f-4c1d-be16-ba14e2c4de6b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 20:09:31.571819  929978 logs.go:138] Found kubelet problem: Sep 16 20:06:35 old-k8s-version-908284 kubelet[661]: E0916 20:06:35.348011     661 pod_workers.go:191] Error syncing pod 76b4241b-0d25-4778-8c3a-942e26c51c8a ("dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"
	W0916 20:09:31.572175  929978 logs.go:138] Found kubelet problem: Sep 16 20:06:50 old-k8s-version-908284 kubelet[661]: E0916 20:06:50.352305     661 pod_workers.go:191] Error syncing pod 76b4241b-0d25-4778-8c3a-942e26c51c8a ("dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"
	W0916 20:09:31.574827  929978 logs.go:138] Found kubelet problem: Sep 16 20:06:50 old-k8s-version-908284 kubelet[661]: E0916 20:06:50.358463     661 pod_workers.go:191] Error syncing pod 33fe9335-f85f-4c1d-be16-ba14e2c4de6b ("metrics-server-9975d5f86-92f4t_kube-system(33fe9335-f85f-4c1d-be16-ba14e2c4de6b)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0916 20:09:31.575128  929978 logs.go:138] Found kubelet problem: Sep 16 20:07:01 old-k8s-version-908284 kubelet[661]: E0916 20:07:01.347599     661 pod_workers.go:191] Error syncing pod 33fe9335-f85f-4c1d-be16-ba14e2c4de6b ("metrics-server-9975d5f86-92f4t_kube-system(33fe9335-f85f-4c1d-be16-ba14e2c4de6b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 20:09:31.575520  929978 logs.go:138] Found kubelet problem: Sep 16 20:07:04 old-k8s-version-908284 kubelet[661]: E0916 20:07:04.348052     661 pod_workers.go:191] Error syncing pod 76b4241b-0d25-4778-8c3a-942e26c51c8a ("dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"
	W0916 20:09:31.575751  929978 logs.go:138] Found kubelet problem: Sep 16 20:07:16 old-k8s-version-908284 kubelet[661]: E0916 20:07:16.347843     661 pod_workers.go:191] Error syncing pod 33fe9335-f85f-4c1d-be16-ba14e2c4de6b ("metrics-server-9975d5f86-92f4t_kube-system(33fe9335-f85f-4c1d-be16-ba14e2c4de6b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 20:09:31.576386  929978 logs.go:138] Found kubelet problem: Sep 16 20:07:20 old-k8s-version-908284 kubelet[661]: E0916 20:07:20.103183     661 pod_workers.go:191] Error syncing pod 76b4241b-0d25-4778-8c3a-942e26c51c8a ("dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"
	W0916 20:09:31.576743  929978 logs.go:138] Found kubelet problem: Sep 16 20:07:28 old-k8s-version-908284 kubelet[661]: E0916 20:07:28.823105     661 pod_workers.go:191] Error syncing pod 76b4241b-0d25-4778-8c3a-942e26c51c8a ("dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"
	W0916 20:09:31.576960  929978 logs.go:138] Found kubelet problem: Sep 16 20:07:29 old-k8s-version-908284 kubelet[661]: E0916 20:07:29.348615     661 pod_workers.go:191] Error syncing pod 33fe9335-f85f-4c1d-be16-ba14e2c4de6b ("metrics-server-9975d5f86-92f4t_kube-system(33fe9335-f85f-4c1d-be16-ba14e2c4de6b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 20:09:31.577183  929978 logs.go:138] Found kubelet problem: Sep 16 20:07:40 old-k8s-version-908284 kubelet[661]: E0916 20:07:40.348522     661 pod_workers.go:191] Error syncing pod 33fe9335-f85f-4c1d-be16-ba14e2c4de6b ("metrics-server-9975d5f86-92f4t_kube-system(33fe9335-f85f-4c1d-be16-ba14e2c4de6b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 20:09:31.577539  929978 logs.go:138] Found kubelet problem: Sep 16 20:07:43 old-k8s-version-908284 kubelet[661]: E0916 20:07:43.347650     661 pod_workers.go:191] Error syncing pod 76b4241b-0d25-4778-8c3a-942e26c51c8a ("dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"
	W0916 20:09:31.577758  929978 logs.go:138] Found kubelet problem: Sep 16 20:07:53 old-k8s-version-908284 kubelet[661]: E0916 20:07:53.347525     661 pod_workers.go:191] Error syncing pod 33fe9335-f85f-4c1d-be16-ba14e2c4de6b ("metrics-server-9975d5f86-92f4t_kube-system(33fe9335-f85f-4c1d-be16-ba14e2c4de6b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 20:09:31.578111  929978 logs.go:138] Found kubelet problem: Sep 16 20:07:58 old-k8s-version-908284 kubelet[661]: E0916 20:07:58.348899     661 pod_workers.go:191] Error syncing pod 76b4241b-0d25-4778-8c3a-942e26c51c8a ("dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"
	W0916 20:09:31.578325  929978 logs.go:138] Found kubelet problem: Sep 16 20:08:08 old-k8s-version-908284 kubelet[661]: E0916 20:08:08.350720     661 pod_workers.go:191] Error syncing pod 33fe9335-f85f-4c1d-be16-ba14e2c4de6b ("metrics-server-9975d5f86-92f4t_kube-system(33fe9335-f85f-4c1d-be16-ba14e2c4de6b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 20:09:31.578677  929978 logs.go:138] Found kubelet problem: Sep 16 20:08:11 old-k8s-version-908284 kubelet[661]: E0916 20:08:11.347616     661 pod_workers.go:191] Error syncing pod 76b4241b-0d25-4778-8c3a-942e26c51c8a ("dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"
	W0916 20:09:31.578889  929978 logs.go:138] Found kubelet problem: Sep 16 20:08:21 old-k8s-version-908284 kubelet[661]: E0916 20:08:21.347550     661 pod_workers.go:191] Error syncing pod 33fe9335-f85f-4c1d-be16-ba14e2c4de6b ("metrics-server-9975d5f86-92f4t_kube-system(33fe9335-f85f-4c1d-be16-ba14e2c4de6b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 20:09:31.579243  929978 logs.go:138] Found kubelet problem: Sep 16 20:08:24 old-k8s-version-908284 kubelet[661]: E0916 20:08:24.347935     661 pod_workers.go:191] Error syncing pod 76b4241b-0d25-4778-8c3a-942e26c51c8a ("dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"
	W0916 20:09:31.579459  929978 logs.go:138] Found kubelet problem: Sep 16 20:08:32 old-k8s-version-908284 kubelet[661]: E0916 20:08:32.348528     661 pod_workers.go:191] Error syncing pod 33fe9335-f85f-4c1d-be16-ba14e2c4de6b ("metrics-server-9975d5f86-92f4t_kube-system(33fe9335-f85f-4c1d-be16-ba14e2c4de6b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 20:09:31.579810  929978 logs.go:138] Found kubelet problem: Sep 16 20:08:37 old-k8s-version-908284 kubelet[661]: E0916 20:08:37.347607     661 pod_workers.go:191] Error syncing pod 76b4241b-0d25-4778-8c3a-942e26c51c8a ("dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"
	W0916 20:09:31.580022  929978 logs.go:138] Found kubelet problem: Sep 16 20:08:47 old-k8s-version-908284 kubelet[661]: E0916 20:08:47.347660     661 pod_workers.go:191] Error syncing pod 33fe9335-f85f-4c1d-be16-ba14e2c4de6b ("metrics-server-9975d5f86-92f4t_kube-system(33fe9335-f85f-4c1d-be16-ba14e2c4de6b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 20:09:31.580382  929978 logs.go:138] Found kubelet problem: Sep 16 20:08:52 old-k8s-version-908284 kubelet[661]: E0916 20:08:52.352536     661 pod_workers.go:191] Error syncing pod 76b4241b-0d25-4778-8c3a-942e26c51c8a ("dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"
	W0916 20:09:31.580592  929978 logs.go:138] Found kubelet problem: Sep 16 20:08:58 old-k8s-version-908284 kubelet[661]: E0916 20:08:58.348436     661 pod_workers.go:191] Error syncing pod 33fe9335-f85f-4c1d-be16-ba14e2c4de6b ("metrics-server-9975d5f86-92f4t_kube-system(33fe9335-f85f-4c1d-be16-ba14e2c4de6b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 20:09:31.580951  929978 logs.go:138] Found kubelet problem: Sep 16 20:09:04 old-k8s-version-908284 kubelet[661]: E0916 20:09:04.348257     661 pod_workers.go:191] Error syncing pod 76b4241b-0d25-4778-8c3a-942e26c51c8a ("dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"
	W0916 20:09:31.581162  929978 logs.go:138] Found kubelet problem: Sep 16 20:09:12 old-k8s-version-908284 kubelet[661]: E0916 20:09:12.351486     661 pod_workers.go:191] Error syncing pod 33fe9335-f85f-4c1d-be16-ba14e2c4de6b ("metrics-server-9975d5f86-92f4t_kube-system(33fe9335-f85f-4c1d-be16-ba14e2c4de6b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 20:09:31.581519  929978 logs.go:138] Found kubelet problem: Sep 16 20:09:15 old-k8s-version-908284 kubelet[661]: E0916 20:09:15.347023     661 pod_workers.go:191] Error syncing pod 76b4241b-0d25-4778-8c3a-942e26c51c8a ("dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"
	W0916 20:09:31.581740  929978 logs.go:138] Found kubelet problem: Sep 16 20:09:25 old-k8s-version-908284 kubelet[661]: E0916 20:09:25.347602     661 pod_workers.go:191] Error syncing pod 33fe9335-f85f-4c1d-be16-ba14e2c4de6b ("metrics-server-9975d5f86-92f4t_kube-system(33fe9335-f85f-4c1d-be16-ba14e2c4de6b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 20:09:31.582093  929978 logs.go:138] Found kubelet problem: Sep 16 20:09:26 old-k8s-version-908284 kubelet[661]: E0916 20:09:26.347078     661 pod_workers.go:191] Error syncing pod 76b4241b-0d25-4778-8c3a-942e26c51c8a ("dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"
	I0916 20:09:31.582119  929978 logs.go:123] Gathering logs for kube-apiserver [5125f7e68621ccafdca0574900a08627704d25dad8c6cb286177bcceafb722f3] ...
	I0916 20:09:31.582148  929978 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5125f7e68621ccafdca0574900a08627704d25dad8c6cb286177bcceafb722f3"
	I0916 20:09:31.714254  929978 logs.go:123] Gathering logs for etcd [445fc47c234683c922f709203c6b1528636824cf9904de9437bfad48f5bb40bb] ...
	I0916 20:09:31.714332  929978 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 445fc47c234683c922f709203c6b1528636824cf9904de9437bfad48f5bb40bb"
	I0916 20:09:31.833273  929978 logs.go:123] Gathering logs for kube-controller-manager [34ed62a120d6504514cf022878498c06dd4558aff9f75e87eff0e60b822c82b0] ...
	I0916 20:09:31.833356  929978 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 34ed62a120d6504514cf022878498c06dd4558aff9f75e87eff0e60b822c82b0"
	I0916 20:09:31.917196  929978 logs.go:123] Gathering logs for containerd ...
	I0916 20:09:31.917281  929978 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0916 20:09:31.983819  929978 logs.go:123] Gathering logs for dmesg ...
	I0916 20:09:31.983909  929978 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 20:09:32.006773  929978 logs.go:123] Gathering logs for etcd [b1144ab00f4c3249a9dbe4fed6b1368a14fe5aba7783b45f4fa53cc5e203ce97] ...
	I0916 20:09:32.006803  929978 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b1144ab00f4c3249a9dbe4fed6b1368a14fe5aba7783b45f4fa53cc5e203ce97"
	I0916 20:09:32.052165  929978 logs.go:123] Gathering logs for coredns [7d87e62b9d7540a8d0fcd059083894029feeab7ea8f0a8cacd8811b01eee9456] ...
	I0916 20:09:32.052195  929978 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7d87e62b9d7540a8d0fcd059083894029feeab7ea8f0a8cacd8811b01eee9456"
	I0916 20:09:32.090881  929978 logs.go:123] Gathering logs for kube-proxy [d5453c9c01ecd9cc0d09b6bf868960b6312637344560d06faeca7704cd561607] ...
	I0916 20:09:32.090952  929978 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d5453c9c01ecd9cc0d09b6bf868960b6312637344560d06faeca7704cd561607"
	I0916 20:09:32.131065  929978 logs.go:123] Gathering logs for storage-provisioner [05471d3e1c31ae8e82b32033906c8b2d9d329a3ea9850acf38ac17d8175331ed] ...
	I0916 20:09:32.131132  929978 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 05471d3e1c31ae8e82b32033906c8b2d9d329a3ea9850acf38ac17d8175331ed"
	I0916 20:09:32.180567  929978 logs.go:123] Gathering logs for describe nodes ...
	I0916 20:09:32.180595  929978 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 20:09:32.332711  929978 logs.go:123] Gathering logs for kube-scheduler [e72d9de27e5e47273b9413248459297d25f9204a3d3bdbe871f68a09eed8cc31] ...
	I0916 20:09:32.332739  929978 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e72d9de27e5e47273b9413248459297d25f9204a3d3bdbe871f68a09eed8cc31"
	I0916 20:09:32.408425  929978 logs.go:123] Gathering logs for kube-proxy [f1a07ea1e6c1902d18aad82fc148e1704278cff58e2f38adcb46954946abe5af] ...
	I0916 20:09:32.408453  929978 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f1a07ea1e6c1902d18aad82fc148e1704278cff58e2f38adcb46954946abe5af"
	I0916 20:09:32.454484  929978 logs.go:123] Gathering logs for kubernetes-dashboard [0f208bbb678a1a4ad386a9e37deb1b867d200257d179670c26adcc562f0e4cf2] ...
	I0916 20:09:32.454512  929978 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f208bbb678a1a4ad386a9e37deb1b867d200257d179670c26adcc562f0e4cf2"
	I0916 20:09:32.498612  929978 logs.go:123] Gathering logs for storage-provisioner [acc8d7546233616133689ca5f3763e68bd10522884488e5cf9ab9bf09947cb67] ...
	I0916 20:09:32.498693  929978 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 acc8d7546233616133689ca5f3763e68bd10522884488e5cf9ab9bf09947cb67"
	I0916 20:09:32.539543  929978 logs.go:123] Gathering logs for container status ...
	I0916 20:09:32.539575  929978 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 20:09:32.582090  929978 out.go:358] Setting ErrFile to fd 2...
	I0916 20:09:32.582117  929978 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0916 20:09:32.582172  929978 out.go:270] X Problems detected in kubelet:
	W0916 20:09:32.582186  929978 out.go:270]   Sep 16 20:09:04 old-k8s-version-908284 kubelet[661]: E0916 20:09:04.348257     661 pod_workers.go:191] Error syncing pod 76b4241b-0d25-4778-8c3a-942e26c51c8a ("dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"
	W0916 20:09:32.582194  929978 out.go:270]   Sep 16 20:09:12 old-k8s-version-908284 kubelet[661]: E0916 20:09:12.351486     661 pod_workers.go:191] Error syncing pod 33fe9335-f85f-4c1d-be16-ba14e2c4de6b ("metrics-server-9975d5f86-92f4t_kube-system(33fe9335-f85f-4c1d-be16-ba14e2c4de6b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 20:09:32.582206  929978 out.go:270]   Sep 16 20:09:15 old-k8s-version-908284 kubelet[661]: E0916 20:09:15.347023     661 pod_workers.go:191] Error syncing pod 76b4241b-0d25-4778-8c3a-942e26c51c8a ("dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"
	W0916 20:09:32.582220  929978 out.go:270]   Sep 16 20:09:25 old-k8s-version-908284 kubelet[661]: E0916 20:09:25.347602     661 pod_workers.go:191] Error syncing pod 33fe9335-f85f-4c1d-be16-ba14e2c4de6b ("metrics-server-9975d5f86-92f4t_kube-system(33fe9335-f85f-4c1d-be16-ba14e2c4de6b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 20:09:32.582225  929978 out.go:270]   Sep 16 20:09:26 old-k8s-version-908284 kubelet[661]: E0916 20:09:26.347078     661 pod_workers.go:191] Error syncing pod 76b4241b-0d25-4778-8c3a-942e26c51c8a ("dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"
	I0916 20:09:32.582230  929978 out.go:358] Setting ErrFile to fd 2...
	I0916 20:09:32.582236  929978 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 20:09:42.583550  929978 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0916 20:09:42.606744  929978 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I0916 20:09:42.608187  929978 out.go:201] 
	W0916 20:09:42.609509  929978 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W0916 20:09:42.609759  929978 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	W0916 20:09:42.609867  929978 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	W0916 20:09:42.609907  929978 out.go:270] * 
	W0916 20:09:42.610998  929978 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0916 20:09:42.613117  929978 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p old-k8s-version-908284 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0": exit status 102
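For local triage outside CI, a minimal sketch of the reproduction and log-gathering steps, assuming the same workspace layout, binary path, and profile name as this run (the first three commands are lifted verbatim from the harness output in this report; the final kubectl pod selector is an assumption, not something the harness ran):

	# Re-run the post-stop start that exited 102 in this run
	out/minikube-linux-arm64 start -p old-k8s-version-908284 --memory=2200 \
	  --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system \
	  --disable-driver-mounts --keep-context=false --driver=docker \
	  --container-runtime=containerd --kubernetes-version=v1.20.0

	# Gather the same post-mortem data the test framework collects below
	docker inspect old-k8s-version-908284
	out/minikube-linux-arm64 -p old-k8s-version-908284 logs -n 25

	# The metrics-server ImagePullBackOff in the kubelet output is expected:
	# the earlier "addons enable metrics-server" step deliberately points the
	# image registry at fake.domain (label selector below is an assumption)
	kubectl --context old-k8s-version-908284 -n kube-system describe pod -l k8s-app=metrics-server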
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-908284
helpers_test.go:235: (dbg) docker inspect old-k8s-version-908284:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "016d6b3189bbff3f0b88b0a753deee620ab50397949eabd50845a5ab2b9ee7b7",
	        "Created": "2024-09-16T20:00:45.432968764Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 930180,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-16T20:03:25.208435307Z",
	            "FinishedAt": "2024-09-16T20:03:24.215372361Z"
	        },
	        "Image": "sha256:735d22f77ce2bf9e02c77058920b4d1610fffc1af6c5e42bd1f17e7556552aac",
	        "ResolvConfPath": "/var/lib/docker/containers/016d6b3189bbff3f0b88b0a753deee620ab50397949eabd50845a5ab2b9ee7b7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/016d6b3189bbff3f0b88b0a753deee620ab50397949eabd50845a5ab2b9ee7b7/hostname",
	        "HostsPath": "/var/lib/docker/containers/016d6b3189bbff3f0b88b0a753deee620ab50397949eabd50845a5ab2b9ee7b7/hosts",
	        "LogPath": "/var/lib/docker/containers/016d6b3189bbff3f0b88b0a753deee620ab50397949eabd50845a5ab2b9ee7b7/016d6b3189bbff3f0b88b0a753deee620ab50397949eabd50845a5ab2b9ee7b7-json.log",
	        "Name": "/old-k8s-version-908284",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-908284:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-908284",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/7401bcc2106d3e7f32f2715f81c62a4d82d594c4733be15951b9d19672ce7398-init/diff:/var/lib/docker/overlay2/0f997814f4acb2707641eca22120a369f13df677c67e30cebac9ef1a05c579dd/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7401bcc2106d3e7f32f2715f81c62a4d82d594c4733be15951b9d19672ce7398/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7401bcc2106d3e7f32f2715f81c62a4d82d594c4733be15951b9d19672ce7398/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7401bcc2106d3e7f32f2715f81c62a4d82d594c4733be15951b9d19672ce7398/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-908284",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-908284/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-908284",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-908284",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-908284",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "845753be885864f351a0ff7d91420c7f24cfe4d6388f5df064e9865ece73207b",
	            "SandboxKey": "/var/run/docker/netns/845753be8858",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33827"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33828"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33831"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33829"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33830"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-908284": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:55:02",
	                    "DriverOpts": null,
	                    "NetworkID": "87a5fc0a3cce76239f092237e8fbde6169a7d17d5d9775ed42e387827a0dc661",
	                    "EndpointID": "ef8d1eaf58926f3b801e0fe2642f996b54d1388b1b3d6c47ac8b3ed328cbf9f1",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-908284",
	                        "016d6b3189bb"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
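The inspect output above shows the node container still running with its published ports intact. When only a few of these fields matter, a hedged one-liner using Docker's standard --format Go templating can replace the full JSON dump (the field paths mirror the keys shown above):

	docker inspect old-k8s-version-908284 \
	  --format 'status={{.State.Status}} restarts={{.RestartCount}} apiserver=127.0.0.1:{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'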
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-908284 -n old-k8s-version-908284
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-908284 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-908284 logs -n 25: (2.52447401s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p force-systemd-flag-474910                           | force-systemd-flag-474910    | jenkins | v1.34.0 | 16 Sep 24 19:59 UTC | 16 Sep 24 19:59 UTC |
	|         | --memory=2048 --force-systemd                          |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=5 --driver=docker                                   |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	| ssh     | force-systemd-flag-474910                              | force-systemd-flag-474910    | jenkins | v1.34.0 | 16 Sep 24 19:59 UTC | 16 Sep 24 19:59 UTC |
	|         | ssh cat                                                |                              |         |         |                     |                     |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| delete  | -p force-systemd-flag-474910                           | force-systemd-flag-474910    | jenkins | v1.34.0 | 16 Sep 24 19:59 UTC | 16 Sep 24 19:59 UTC |
	| start   | -p cert-options-105315                                 | cert-options-105315          | jenkins | v1.34.0 | 16 Sep 24 20:00 UTC | 16 Sep 24 20:00 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1                              |                              |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15                          |                              |         |         |                     |                     |
	|         | --apiserver-names=localhost                            |                              |         |         |                     |                     |
	|         | --apiserver-names=www.google.com                       |                              |         |         |                     |                     |
	|         | --apiserver-port=8555                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	| ssh     | cert-options-105315 ssh                                | cert-options-105315          | jenkins | v1.34.0 | 16 Sep 24 20:00 UTC | 16 Sep 24 20:00 UTC |
	|         | openssl x509 -text -noout -in                          |                              |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                              |         |         |                     |                     |
	| ssh     | -p cert-options-105315 -- sudo                         | cert-options-105315          | jenkins | v1.34.0 | 16 Sep 24 20:00 UTC | 16 Sep 24 20:00 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                              |         |         |                     |                     |
	| delete  | -p cert-options-105315                                 | cert-options-105315          | jenkins | v1.34.0 | 16 Sep 24 20:00 UTC | 16 Sep 24 20:00 UTC |
	| start   | -p old-k8s-version-908284                              | old-k8s-version-908284       | jenkins | v1.34.0 | 16 Sep 24 20:00 UTC | 16 Sep 24 20:03 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-277633                              | cert-expiration-277633       | jenkins | v1.34.0 | 16 Sep 24 20:02 UTC | 16 Sep 24 20:02 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-277633                              | cert-expiration-277633       | jenkins | v1.34.0 | 16 Sep 24 20:02 UTC | 16 Sep 24 20:02 UTC |
	| start   | -p                                                     | default-k8s-diff-port-762419 | jenkins | v1.34.0 | 16 Sep 24 20:02 UTC | 16 Sep 24 20:04 UTC |
	|         | default-k8s-diff-port-762419                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-908284        | old-k8s-version-908284       | jenkins | v1.34.0 | 16 Sep 24 20:03 UTC | 16 Sep 24 20:03 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-908284                              | old-k8s-version-908284       | jenkins | v1.34.0 | 16 Sep 24 20:03 UTC | 16 Sep 24 20:03 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-908284             | old-k8s-version-908284       | jenkins | v1.34.0 | 16 Sep 24 20:03 UTC | 16 Sep 24 20:03 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-908284                              | old-k8s-version-908284       | jenkins | v1.34.0 | 16 Sep 24 20:03 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-762419  | default-k8s-diff-port-762419 | jenkins | v1.34.0 | 16 Sep 24 20:04 UTC | 16 Sep 24 20:04 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-762419 | jenkins | v1.34.0 | 16 Sep 24 20:04 UTC | 16 Sep 24 20:04 UTC |
	|         | default-k8s-diff-port-762419                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-762419       | default-k8s-diff-port-762419 | jenkins | v1.34.0 | 16 Sep 24 20:04 UTC | 16 Sep 24 20:04 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-762419 | jenkins | v1.34.0 | 16 Sep 24 20:04 UTC | 16 Sep 24 20:09 UTC |
	|         | default-k8s-diff-port-762419                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| image   | default-k8s-diff-port-762419                           | default-k8s-diff-port-762419 | jenkins | v1.34.0 | 16 Sep 24 20:09 UTC | 16 Sep 24 20:09 UTC |
	|         | image list --format=json                               |                              |         |         |                     |                     |
	| pause   | -p                                                     | default-k8s-diff-port-762419 | jenkins | v1.34.0 | 16 Sep 24 20:09 UTC | 16 Sep 24 20:09 UTC |
	|         | default-k8s-diff-port-762419                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p                                                     | default-k8s-diff-port-762419 | jenkins | v1.34.0 | 16 Sep 24 20:09 UTC | 16 Sep 24 20:09 UTC |
	|         | default-k8s-diff-port-762419                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-762419 | jenkins | v1.34.0 | 16 Sep 24 20:09 UTC | 16 Sep 24 20:09 UTC |
	|         | default-k8s-diff-port-762419                           |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-762419 | jenkins | v1.34.0 | 16 Sep 24 20:09 UTC | 16 Sep 24 20:09 UTC |
	|         | default-k8s-diff-port-762419                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-931636                                  | embed-certs-931636           | jenkins | v1.34.0 | 16 Sep 24 20:09 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 20:09:23
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 20:09:23.866628  939831 out.go:345] Setting OutFile to fd 1 ...
	I0916 20:09:23.866897  939831 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 20:09:23.866918  939831 out.go:358] Setting ErrFile to fd 2...
	I0916 20:09:23.866924  939831 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 20:09:23.867177  939831 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19649-716050/.minikube/bin
	I0916 20:09:23.867654  939831 out.go:352] Setting JSON to false
	I0916 20:09:23.868652  939831 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":13877,"bootTime":1726503487,"procs":235,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0916 20:09:23.868727  939831 start.go:139] virtualization:  
	I0916 20:09:23.870973  939831 out.go:177] * [embed-certs-931636] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0916 20:09:23.872258  939831 out.go:177]   - MINIKUBE_LOCATION=19649
	I0916 20:09:23.872342  939831 notify.go:220] Checking for updates...
	I0916 20:09:23.876371  939831 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 20:09:23.877692  939831 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19649-716050/kubeconfig
	I0916 20:09:23.879024  939831 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19649-716050/.minikube
	I0916 20:09:23.880185  939831 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0916 20:09:23.881243  939831 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 20:09:23.883284  939831 config.go:182] Loaded profile config "old-k8s-version-908284": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0916 20:09:23.883446  939831 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 20:09:23.914954  939831 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0916 20:09:23.915086  939831 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 20:09:23.980564  939831 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-16 20:09:23.971005476 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0916 20:09:23.980678  939831 docker.go:318] overlay module found
	I0916 20:09:23.982186  939831 out.go:177] * Using the docker driver based on user configuration
	I0916 20:09:23.983387  939831 start.go:297] selected driver: docker
	I0916 20:09:23.983401  939831 start.go:901] validating driver "docker" against <nil>
	I0916 20:09:23.983415  939831 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 20:09:23.984055  939831 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 20:09:24.050613  939831 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-16 20:09:24.029434805 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0916 20:09:24.050836  939831 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 20:09:24.051070  939831 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 20:09:24.052628  939831 out.go:177] * Using Docker driver with root privileges
	I0916 20:09:24.053763  939831 cni.go:84] Creating CNI manager for ""
	I0916 20:09:24.053923  939831 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0916 20:09:24.053939  939831 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0916 20:09:24.054026  939831 start.go:340] cluster config:
	{Name:embed-certs-931636 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-931636 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 20:09:24.055817  939831 out.go:177] * Starting "embed-certs-931636" primary control-plane node in "embed-certs-931636" cluster
	I0916 20:09:24.057424  939831 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0916 20:09:24.059295  939831 out.go:177] * Pulling base image v0.0.45-1726481311-19649 ...
	I0916 20:09:24.061516  939831 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0916 20:09:24.061556  939831 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc in local docker daemon
	I0916 20:09:24.061580  939831 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19649-716050/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4
	I0916 20:09:24.061590  939831 cache.go:56] Caching tarball of preloaded images
	I0916 20:09:24.061690  939831 preload.go:172] Found /home/jenkins/minikube-integration/19649-716050/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0916 20:09:24.061701  939831 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on containerd
	I0916 20:09:24.061838  939831 profile.go:143] Saving config to /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/embed-certs-931636/config.json ...
	I0916 20:09:24.061860  939831 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/embed-certs-931636/config.json: {Name:mkd833d35bf6fb507573effecb9914d1803de6c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	W0916 20:09:24.084069  939831 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc is of wrong architecture
	I0916 20:09:24.084098  939831 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc to local cache
	I0916 20:09:24.084204  939831 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc in local cache directory
	I0916 20:09:24.084237  939831 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc in local cache directory, skipping pull
	I0916 20:09:24.084250  939831 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc exists in cache, skipping pull
	I0916 20:09:24.084260  939831 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc as a tarball
	I0916 20:09:24.084269  939831 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc from local cache
	I0916 20:09:24.208134  939831 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc from cached tarball
	I0916 20:09:24.208171  939831 cache.go:194] Successfully downloaded all kic artifacts
	I0916 20:09:24.208202  939831 start.go:360] acquireMachinesLock for embed-certs-931636: {Name:mk514448ece5bdc4e3cb54ecafbbded83dfd09e4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 20:09:24.208717  939831 start.go:364] duration metric: took 490.254µs to acquireMachinesLock for "embed-certs-931636"
	I0916 20:09:24.208756  939831 start.go:93] Provisioning new machine with config: &{Name:embed-certs-931636 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-931636 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0916 20:09:24.208860  939831 start.go:125] createHost starting for "" (driver="docker")
	I0916 20:09:19.747506  929978 logs.go:123] Gathering logs for kube-apiserver [29503eaa5c2ae2b3f8ad37d2fa456369ed87669ef1293a388883a927f2d6f5bd] ...
	I0916 20:09:19.747536  929978 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 29503eaa5c2ae2b3f8ad37d2fa456369ed87669ef1293a388883a927f2d6f5bd"
	I0916 20:09:19.834551  929978 logs.go:123] Gathering logs for etcd [445fc47c234683c922f709203c6b1528636824cf9904de9437bfad48f5bb40bb] ...
	I0916 20:09:19.834586  929978 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 445fc47c234683c922f709203c6b1528636824cf9904de9437bfad48f5bb40bb"
	I0916 20:09:19.890990  929978 logs.go:123] Gathering logs for etcd [b1144ab00f4c3249a9dbe4fed6b1368a14fe5aba7783b45f4fa53cc5e203ce97] ...
	I0916 20:09:19.891022  929978 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b1144ab00f4c3249a9dbe4fed6b1368a14fe5aba7783b45f4fa53cc5e203ce97"
	I0916 20:09:19.953195  929978 logs.go:123] Gathering logs for kube-proxy [f1a07ea1e6c1902d18aad82fc148e1704278cff58e2f38adcb46954946abe5af] ...
	I0916 20:09:19.953231  929978 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f1a07ea1e6c1902d18aad82fc148e1704278cff58e2f38adcb46954946abe5af"
	I0916 20:09:20.004418  929978 out.go:358] Setting ErrFile to fd 2...
	I0916 20:09:20.004449  929978 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0916 20:09:20.004500  929978 out.go:270] X Problems detected in kubelet:
	W0916 20:09:20.004508  929978 out.go:270]   Sep 16 20:08:52 old-k8s-version-908284 kubelet[661]: E0916 20:08:52.352536     661 pod_workers.go:191] Error syncing pod 76b4241b-0d25-4778-8c3a-942e26c51c8a ("dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"
	W0916 20:09:20.004514  929978 out.go:270]   Sep 16 20:08:58 old-k8s-version-908284 kubelet[661]: E0916 20:08:58.348436     661 pod_workers.go:191] Error syncing pod 33fe9335-f85f-4c1d-be16-ba14e2c4de6b ("metrics-server-9975d5f86-92f4t_kube-system(33fe9335-f85f-4c1d-be16-ba14e2c4de6b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 20:09:20.004525  929978 out.go:270]   Sep 16 20:09:04 old-k8s-version-908284 kubelet[661]: E0916 20:09:04.348257     661 pod_workers.go:191] Error syncing pod 76b4241b-0d25-4778-8c3a-942e26c51c8a ("dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"
	W0916 20:09:20.004530  929978 out.go:270]   Sep 16 20:09:12 old-k8s-version-908284 kubelet[661]: E0916 20:09:12.351486     661 pod_workers.go:191] Error syncing pod 33fe9335-f85f-4c1d-be16-ba14e2c4de6b ("metrics-server-9975d5f86-92f4t_kube-system(33fe9335-f85f-4c1d-be16-ba14e2c4de6b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 20:09:20.004543  929978 out.go:270]   Sep 16 20:09:15 old-k8s-version-908284 kubelet[661]: E0916 20:09:15.347023     661 pod_workers.go:191] Error syncing pod 76b4241b-0d25-4778-8c3a-942e26c51c8a ("dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"
	I0916 20:09:20.004548  929978 out.go:358] Setting ErrFile to fd 2...
	I0916 20:09:20.004555  929978 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 20:09:24.211775  939831 out.go:235] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0916 20:09:24.212005  939831 start.go:159] libmachine.API.Create for "embed-certs-931636" (driver="docker")
	I0916 20:09:24.212035  939831 client.go:168] LocalClient.Create starting
	I0916 20:09:24.212097  939831 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19649-716050/.minikube/certs/ca.pem
	I0916 20:09:24.212137  939831 main.go:141] libmachine: Decoding PEM data...
	I0916 20:09:24.212156  939831 main.go:141] libmachine: Parsing certificate...
	I0916 20:09:24.212209  939831 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19649-716050/.minikube/certs/cert.pem
	I0916 20:09:24.212290  939831 main.go:141] libmachine: Decoding PEM data...
	I0916 20:09:24.212306  939831 main.go:141] libmachine: Parsing certificate...
	I0916 20:09:24.212679  939831 cli_runner.go:164] Run: docker network inspect embed-certs-931636 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0916 20:09:24.228317  939831 cli_runner.go:211] docker network inspect embed-certs-931636 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0916 20:09:24.228405  939831 network_create.go:284] running [docker network inspect embed-certs-931636] to gather additional debugging logs...
	I0916 20:09:24.228425  939831 cli_runner.go:164] Run: docker network inspect embed-certs-931636
	W0916 20:09:24.242685  939831 cli_runner.go:211] docker network inspect embed-certs-931636 returned with exit code 1
	I0916 20:09:24.242719  939831 network_create.go:287] error running [docker network inspect embed-certs-931636]: docker network inspect embed-certs-931636: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-931636 not found
	I0916 20:09:24.242732  939831 network_create.go:289] output of [docker network inspect embed-certs-931636]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-931636 not found
	
	** /stderr **
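
The exit-code-1 path above is how minikube tells "network missing" apart from other inspect failures: stdout is an empty list and stderr carries "network ... not found". A minimal standalone probe of the same condition, assuming the target name is in $NET:

    # Probe for an existing docker network; inspect exits non-zero when the
    # network does not exist, so the else branch means "safe to create".
    if docker network inspect "$NET" >/dev/null 2>&1; then
      echo "network $NET exists"
    else
      echo "network $NET not found"
    fi
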
	I0916 20:09:24.242856  939831 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 20:09:24.259364  939831 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-258bf58fc56e IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:93:66:d0:7a} reservation:<nil>}
	I0916 20:09:24.259905  939831 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-fb534575dfbd IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:bb:29:75:cb} reservation:<nil>}
	I0916 20:09:24.260330  939831 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-5b4aca29c5f4 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:d2:75:2a:d1} reservation:<nil>}
	I0916 20:09:24.260869  939831 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40018e4270}
	I0916 20:09:24.260893  939831 network_create.go:124] attempt to create docker network embed-certs-931636 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0916 20:09:24.260960  939831 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-931636 embed-certs-931636
	I0916 20:09:24.330840  939831 network_create.go:108] docker network embed-certs-931636 192.168.76.0/24 created
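
The three "skipping subnet" lines plus the create above implement a first-free /24 scan over 192.168.x.0/24 candidates stepping by 9 (49, 58, 67, 76, ...). A minimal sketch of the same scan, assuming the step continues as in the log and $NET holds the network name:

    # Collect subnets already claimed by docker networks, then create the
    # first candidate /24 that is not in use, mirroring the logged command.
    used=$(docker network inspect $(docker network ls -q) \
             --format '{{range .IPAM.Config}}{{.Subnet}}{{"\n"}}{{end}}' 2>/dev/null)
    for third in 49 58 67 76 85 94; do
      subnet="192.168.${third}.0/24"
      if ! grep -qx "$subnet" <<<"$used"; then
        docker network create --driver=bridge \
          --subnet="$subnet" --gateway="192.168.${third}.1" \
          -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 "$NET"
        break
      fi
    done
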
	I0916 20:09:24.330869  939831 kic.go:121] calculated static IP "192.168.76.2" for the "embed-certs-931636" container
	I0916 20:09:24.330954  939831 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0916 20:09:24.350555  939831 cli_runner.go:164] Run: docker volume create embed-certs-931636 --label name.minikube.sigs.k8s.io=embed-certs-931636 --label created_by.minikube.sigs.k8s.io=true
	I0916 20:09:24.368440  939831 oci.go:103] Successfully created a docker volume embed-certs-931636
	I0916 20:09:24.368531  939831 cli_runner.go:164] Run: docker run --rm --name embed-certs-931636-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-931636 --entrypoint /usr/bin/test -v embed-certs-931636:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc -d /var/lib
	I0916 20:09:25.012999  939831 oci.go:107] Successfully prepared a docker volume embed-certs-931636
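
The "-preload-sidecar" run above appears to be a volume probe: it mounts the fresh volume at /var and runs test -d /var/lib inside the kicbase image, so a zero exit shows the volume mounts and was populated from the image's /var. An equivalent one-liner, with $VOL and $IMG as placeholders:

    # Probe a named volume by running `test -d /var/lib` inside the image;
    # exit status 0 means the mount works and /var/lib is present.
    docker run --rm --entrypoint /usr/bin/test -v "$VOL":/var "$IMG" -d /var/lib \
      && echo "volume $VOL OK"
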
	I0916 20:09:25.013061  939831 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0916 20:09:25.013094  939831 kic.go:194] Starting extracting preloaded images to volume ...
	I0916 20:09:25.013191  939831 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19649-716050/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-931636:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc -I lz4 -xf /preloaded.tar -C /extractDir
	I0916 20:09:29.677187  939831 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19649-716050/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-931636:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc -I lz4 -xf /preloaded.tar -C /extractDir: (4.663935895s)
	I0916 20:09:29.677222  939831 kic.go:203] duration metric: took 4.664124018s to extract preloaded images to volume ...
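
The extraction ran tar inside the kicbase image rather than on the host, which writes straight into the named volume and sidesteps any host tar/lz4 dependency. A standalone sketch of the same pattern, with placeholder paths in $VOL and $IMG:

    # Stream an lz4-compressed preload tarball into a named volume using
    # the tar binary shipped in the image, as the logged command does.
    docker run --rm --entrypoint /usr/bin/tar \
      -v "$PWD/preloaded-images.tar.lz4":/preloaded.tar:ro \
      -v "$VOL":/extractDir \
      "$IMG" -I lz4 -xf /preloaded.tar -C /extractDir
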
	W0916 20:09:29.677374  939831 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0916 20:09:29.677494  939831 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0916 20:09:29.741229  939831 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-931636 --name embed-certs-931636 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-931636 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-931636 --network embed-certs-931636 --ip 192.168.76.2 --volume embed-certs-931636:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc
	I0916 20:09:30.105701  939831 cli_runner.go:164] Run: docker container inspect embed-certs-931636 --format={{.State.Running}}
	I0916 20:09:30.132649  939831 cli_runner.go:164] Run: docker container inspect embed-certs-931636 --format={{.State.Status}}
	I0916 20:09:30.164028  939831 cli_runner.go:164] Run: docker exec embed-certs-931636 stat /var/lib/dpkg/alternatives/iptables
	I0916 20:09:30.276534  939831 oci.go:144] the created container "embed-certs-931636" has a running status.
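
The two inspects plus the exec above form the liveness gate for the freshly started node container. A compact equivalent, assuming the container name is in $NAME:

    # Wait until the container reports Running, then verify the Debian
    # alternatives entry for iptables exists inside it, as the log does.
    until [ "$(docker container inspect -f '{{.State.Running}}' "$NAME")" = "true" ]; do
      sleep 1
    done
    docker exec "$NAME" stat /var/lib/dpkg/alternatives/iptables
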
	I0916 20:09:30.276561  939831 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19649-716050/.minikube/machines/embed-certs-931636/id_rsa...
	I0916 20:09:30.715595  939831 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19649-716050/.minikube/machines/embed-certs-931636/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0916 20:09:30.751928  939831 cli_runner.go:164] Run: docker container inspect embed-certs-931636 --format={{.State.Status}}
	I0916 20:09:30.781421  939831 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0916 20:09:30.781439  939831 kic_runner.go:114] Args: [docker exec --privileged embed-certs-931636 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0916 20:09:30.893488  939831 cli_runner.go:164] Run: docker container inspect embed-certs-931636 --format={{.State.Status}}
	I0916 20:09:30.923139  939831 machine.go:93] provisionDockerMachine start ...
	I0916 20:09:30.923237  939831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-931636
	I0916 20:09:30.946670  939831 main.go:141] libmachine: Using SSH client type: native
	I0916 20:09:30.946939  939831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 33837 <nil> <nil>}
	I0916 20:09:30.946954  939831 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 20:09:30.947646  939831 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
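
A handshake EOF this early is unsurprising, since sshd inside the container is still coming up and the provisioner retries. To reach the node manually once it is up, resolve the published port the same way the log does; the key path and docker user below follow the conventions shown elsewhere in this log:

    # Resolve the host port Docker mapped to the node's 22/tcp, then SSH in
    # with the key generated at the path logged above.
    PORT=$(docker container inspect -f \
      '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' embed-certs-931636)
    ssh -o StrictHostKeyChecking=no \
      -i /home/jenkins/minikube-integration/19649-716050/.minikube/machines/embed-certs-931636/id_rsa \
      -p "$PORT" docker@127.0.0.1 hostname
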
	I0916 20:09:30.008755  929978 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 20:09:30.033592  929978 api_server.go:72] duration metric: took 5m57.634635944s to wait for apiserver process to appear ...
	I0916 20:09:30.033618  929978 api_server.go:88] waiting for apiserver healthz status ...
	I0916 20:09:30.033657  929978 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0916 20:09:30.033745  929978 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 20:09:30.169231  929978 cri.go:89] found id: "5125f7e68621ccafdca0574900a08627704d25dad8c6cb286177bcceafb722f3"
	I0916 20:09:30.169257  929978 cri.go:89] found id: "29503eaa5c2ae2b3f8ad37d2fa456369ed87669ef1293a388883a927f2d6f5bd"
	I0916 20:09:30.169262  929978 cri.go:89] found id: ""
	I0916 20:09:30.169269  929978 logs.go:276] 2 containers: [5125f7e68621ccafdca0574900a08627704d25dad8c6cb286177bcceafb722f3 29503eaa5c2ae2b3f8ad37d2fa456369ed87669ef1293a388883a927f2d6f5bd]
	I0916 20:09:30.169329  929978 ssh_runner.go:195] Run: which crictl
	I0916 20:09:30.186647  929978 ssh_runner.go:195] Run: which crictl
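
Each gather cycle above follows one pattern: crictl ps filtered by component name to collect container IDs, then a 400-line log tail per ID. Run directly on the node, the same pattern is, for any component name in $COMP:

    # List all containers (running or exited) for one component and tail
    # each one's logs, matching the per-component loop in the log above.
    for id in $(sudo crictl ps -a --quiet --name="$COMP"); do
      sudo /usr/bin/crictl logs --tail 400 "$id"
    done
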
	I0916 20:09:30.193396  929978 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0916 20:09:30.193474  929978 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 20:09:30.298899  929978 cri.go:89] found id: "445fc47c234683c922f709203c6b1528636824cf9904de9437bfad48f5bb40bb"
	I0916 20:09:30.298922  929978 cri.go:89] found id: "b1144ab00f4c3249a9dbe4fed6b1368a14fe5aba7783b45f4fa53cc5e203ce97"
	I0916 20:09:30.298926  929978 cri.go:89] found id: ""
	I0916 20:09:30.298934  929978 logs.go:276] 2 containers: [445fc47c234683c922f709203c6b1528636824cf9904de9437bfad48f5bb40bb b1144ab00f4c3249a9dbe4fed6b1368a14fe5aba7783b45f4fa53cc5e203ce97]
	I0916 20:09:30.298992  929978 ssh_runner.go:195] Run: which crictl
	I0916 20:09:30.303813  929978 ssh_runner.go:195] Run: which crictl
	I0916 20:09:30.307717  929978 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0916 20:09:30.307788  929978 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 20:09:30.388113  929978 cri.go:89] found id: "7d87e62b9d7540a8d0fcd059083894029feeab7ea8f0a8cacd8811b01eee9456"
	I0916 20:09:30.388133  929978 cri.go:89] found id: "29165aa257b4ed82c3bed159074da5f7d4b4358a908e5b0b2f105c538186699c"
	I0916 20:09:30.388139  929978 cri.go:89] found id: ""
	I0916 20:09:30.388146  929978 logs.go:276] 2 containers: [7d87e62b9d7540a8d0fcd059083894029feeab7ea8f0a8cacd8811b01eee9456 29165aa257b4ed82c3bed159074da5f7d4b4358a908e5b0b2f105c538186699c]
	I0916 20:09:30.388206  929978 ssh_runner.go:195] Run: which crictl
	I0916 20:09:30.394236  929978 ssh_runner.go:195] Run: which crictl
	I0916 20:09:30.398922  929978 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0916 20:09:30.399042  929978 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 20:09:30.457493  929978 cri.go:89] found id: "7cdfa563a6ecfd1d7a2de5d367a101d25b1393c9cd20db9b8cc1ac35ca3d5911"
	I0916 20:09:30.457581  929978 cri.go:89] found id: "e72d9de27e5e47273b9413248459297d25f9204a3d3bdbe871f68a09eed8cc31"
	I0916 20:09:30.457603  929978 cri.go:89] found id: ""
	I0916 20:09:30.457629  929978 logs.go:276] 2 containers: [7cdfa563a6ecfd1d7a2de5d367a101d25b1393c9cd20db9b8cc1ac35ca3d5911 e72d9de27e5e47273b9413248459297d25f9204a3d3bdbe871f68a09eed8cc31]
	I0916 20:09:30.457746  929978 ssh_runner.go:195] Run: which crictl
	I0916 20:09:30.464870  929978 ssh_runner.go:195] Run: which crictl
	I0916 20:09:30.470379  929978 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0916 20:09:30.470499  929978 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 20:09:30.546548  929978 cri.go:89] found id: "d5453c9c01ecd9cc0d09b6bf868960b6312637344560d06faeca7704cd561607"
	I0916 20:09:30.546615  929978 cri.go:89] found id: "f1a07ea1e6c1902d18aad82fc148e1704278cff58e2f38adcb46954946abe5af"
	I0916 20:09:30.546637  929978 cri.go:89] found id: ""
	I0916 20:09:30.546661  929978 logs.go:276] 2 containers: [d5453c9c01ecd9cc0d09b6bf868960b6312637344560d06faeca7704cd561607 f1a07ea1e6c1902d18aad82fc148e1704278cff58e2f38adcb46954946abe5af]
	I0916 20:09:30.546746  929978 ssh_runner.go:195] Run: which crictl
	I0916 20:09:30.557684  929978 ssh_runner.go:195] Run: which crictl
	I0916 20:09:30.568074  929978 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 20:09:30.568207  929978 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 20:09:30.643400  929978 cri.go:89] found id: "e72de83d21dbee5bba189dbb08793f2149fe20dd123b4ab09058a74285c67018"
	I0916 20:09:30.643474  929978 cri.go:89] found id: "34ed62a120d6504514cf022878498c06dd4558aff9f75e87eff0e60b822c82b0"
	I0916 20:09:30.643502  929978 cri.go:89] found id: ""
	I0916 20:09:30.643524  929978 logs.go:276] 2 containers: [e72de83d21dbee5bba189dbb08793f2149fe20dd123b4ab09058a74285c67018 34ed62a120d6504514cf022878498c06dd4558aff9f75e87eff0e60b822c82b0]
	I0916 20:09:30.643609  929978 ssh_runner.go:195] Run: which crictl
	I0916 20:09:30.650571  929978 ssh_runner.go:195] Run: which crictl
	I0916 20:09:30.657181  929978 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0916 20:09:30.657326  929978 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 20:09:30.724423  929978 cri.go:89] found id: "eb8d587a9ae36a864b49240c05c5eb3eef4bbd1461b753f08ec024058d7f6b87"
	I0916 20:09:30.724443  929978 cri.go:89] found id: "73b796fc3c23c7bdf0b3e94af03866db4c7513e7f70c3a040a2734d0323c37be"
	I0916 20:09:30.724448  929978 cri.go:89] found id: ""
	I0916 20:09:30.724455  929978 logs.go:276] 2 containers: [eb8d587a9ae36a864b49240c05c5eb3eef4bbd1461b753f08ec024058d7f6b87 73b796fc3c23c7bdf0b3e94af03866db4c7513e7f70c3a040a2734d0323c37be]
	I0916 20:09:30.724510  929978 ssh_runner.go:195] Run: which crictl
	I0916 20:09:30.742143  929978 ssh_runner.go:195] Run: which crictl
	I0916 20:09:30.752467  929978 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0916 20:09:30.752527  929978 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0916 20:09:30.858864  929978 cri.go:89] found id: "0f208bbb678a1a4ad386a9e37deb1b867d200257d179670c26adcc562f0e4cf2"
	I0916 20:09:30.858883  929978 cri.go:89] found id: ""
	I0916 20:09:30.858891  929978 logs.go:276] 1 containers: [0f208bbb678a1a4ad386a9e37deb1b867d200257d179670c26adcc562f0e4cf2]
	I0916 20:09:30.858943  929978 ssh_runner.go:195] Run: which crictl
	I0916 20:09:30.862910  929978 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0916 20:09:30.862979  929978 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0916 20:09:30.989879  929978 cri.go:89] found id: "acc8d7546233616133689ca5f3763e68bd10522884488e5cf9ab9bf09947cb67"
	I0916 20:09:30.989905  929978 cri.go:89] found id: "05471d3e1c31ae8e82b32033906c8b2d9d329a3ea9850acf38ac17d8175331ed"
	I0916 20:09:30.989914  929978 cri.go:89] found id: ""
	I0916 20:09:30.989922  929978 logs.go:276] 2 containers: [acc8d7546233616133689ca5f3763e68bd10522884488e5cf9ab9bf09947cb67 05471d3e1c31ae8e82b32033906c8b2d9d329a3ea9850acf38ac17d8175331ed]
	I0916 20:09:30.990001  929978 ssh_runner.go:195] Run: which crictl
	I0916 20:09:30.995821  929978 ssh_runner.go:195] Run: which crictl
	I0916 20:09:31.000489  929978 logs.go:123] Gathering logs for kube-apiserver [29503eaa5c2ae2b3f8ad37d2fa456369ed87669ef1293a388883a927f2d6f5bd] ...
	I0916 20:09:31.000522  929978 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 29503eaa5c2ae2b3f8ad37d2fa456369ed87669ef1293a388883a927f2d6f5bd"
	I0916 20:09:31.110698  929978 logs.go:123] Gathering logs for coredns [29165aa257b4ed82c3bed159074da5f7d4b4358a908e5b0b2f105c538186699c] ...
	I0916 20:09:31.114068  929978 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 29165aa257b4ed82c3bed159074da5f7d4b4358a908e5b0b2f105c538186699c"
	I0916 20:09:31.184128  929978 logs.go:123] Gathering logs for kube-scheduler [7cdfa563a6ecfd1d7a2de5d367a101d25b1393c9cd20db9b8cc1ac35ca3d5911] ...
	I0916 20:09:31.184159  929978 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7cdfa563a6ecfd1d7a2de5d367a101d25b1393c9cd20db9b8cc1ac35ca3d5911"
	I0916 20:09:31.250109  929978 logs.go:123] Gathering logs for kube-controller-manager [e72de83d21dbee5bba189dbb08793f2149fe20dd123b4ab09058a74285c67018] ...
	I0916 20:09:31.250138  929978 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e72de83d21dbee5bba189dbb08793f2149fe20dd123b4ab09058a74285c67018"
	I0916 20:09:31.330890  929978 logs.go:123] Gathering logs for kindnet [eb8d587a9ae36a864b49240c05c5eb3eef4bbd1461b753f08ec024058d7f6b87] ...
	I0916 20:09:31.330929  929978 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb8d587a9ae36a864b49240c05c5eb3eef4bbd1461b753f08ec024058d7f6b87"
	I0916 20:09:31.417133  929978 logs.go:123] Gathering logs for kindnet [73b796fc3c23c7bdf0b3e94af03866db4c7513e7f70c3a040a2734d0323c37be] ...
	I0916 20:09:31.417165  929978 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 73b796fc3c23c7bdf0b3e94af03866db4c7513e7f70c3a040a2734d0323c37be"
	I0916 20:09:31.476759  929978 logs.go:123] Gathering logs for kubelet ...
	I0916 20:09:31.476791  929978 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0916 20:09:31.539110  929978 logs.go:138] Found kubelet problem: Sep 16 20:03:48 old-k8s-version-908284 kubelet[661]: E0916 20:03:48.787093     661 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-908284" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-908284' and this object
	W0916 20:09:31.539554  929978 logs.go:138] Found kubelet problem: Sep 16 20:03:48 old-k8s-version-908284 kubelet[661]: E0916 20:03:48.787518     661 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-908284" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-908284' and this object
	W0916 20:09:31.539819  929978 logs.go:138] Found kubelet problem: Sep 16 20:03:48 old-k8s-version-908284 kubelet[661]: E0916 20:03:48.787702     661 reflector.go:138] object-"kube-system"/"kube-proxy-token-87lls": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-87lls" is forbidden: User "system:node:old-k8s-version-908284" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-908284' and this object
	W0916 20:09:31.540074  929978 logs.go:138] Found kubelet problem: Sep 16 20:03:48 old-k8s-version-908284 kubelet[661]: E0916 20:03:48.788079     661 reflector.go:138] object-"kube-system"/"metrics-server-token-chtwf": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-chtwf" is forbidden: User "system:node:old-k8s-version-908284" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-908284' and this object
	W0916 20:09:31.540322  929978 logs.go:138] Found kubelet problem: Sep 16 20:03:48 old-k8s-version-908284 kubelet[661]: E0916 20:03:48.788136     661 reflector.go:138] object-"kube-system"/"kindnet-token-bsw29": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-bsw29" is forbidden: User "system:node:old-k8s-version-908284" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-908284' and this object
	W0916 20:09:31.540558  929978 logs.go:138] Found kubelet problem: Sep 16 20:03:48 old-k8s-version-908284 kubelet[661]: E0916 20:03:48.788201     661 reflector.go:138] object-"default"/"default-token-sj5gg": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-sj5gg" is forbidden: User "system:node:old-k8s-version-908284" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-908284' and this object
	W0916 20:09:31.540810  929978 logs.go:138] Found kubelet problem: Sep 16 20:03:48 old-k8s-version-908284 kubelet[661]: E0916 20:03:48.788258     661 reflector.go:138] object-"kube-system"/"storage-provisioner-token-z86wg": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-z86wg" is forbidden: User "system:node:old-k8s-version-908284" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-908284' and this object
	W0916 20:09:31.541051  929978 logs.go:138] Found kubelet problem: Sep 16 20:03:48 old-k8s-version-908284 kubelet[661]: E0916 20:03:48.789351     661 reflector.go:138] object-"kube-system"/"coredns-token-t6kvz": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-t6kvz" is forbidden: User "system:node:old-k8s-version-908284" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-908284' and this object
	W0916 20:09:31.549884  929978 logs.go:138] Found kubelet problem: Sep 16 20:03:50 old-k8s-version-908284 kubelet[661]: E0916 20:03:50.657497     661 pod_workers.go:191] Error syncing pod 33fe9335-f85f-4c1d-be16-ba14e2c4de6b ("metrics-server-9975d5f86-92f4t_kube-system(33fe9335-f85f-4c1d-be16-ba14e2c4de6b)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0916 20:09:31.550956  929978 logs.go:138] Found kubelet problem: Sep 16 20:03:51 old-k8s-version-908284 kubelet[661]: E0916 20:03:51.471074     661 pod_workers.go:191] Error syncing pod 33fe9335-f85f-4c1d-be16-ba14e2c4de6b ("metrics-server-9975d5f86-92f4t_kube-system(33fe9335-f85f-4c1d-be16-ba14e2c4de6b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 20:09:31.555254  929978 logs.go:138] Found kubelet problem: Sep 16 20:04:03 old-k8s-version-908284 kubelet[661]: E0916 20:04:03.357135     661 pod_workers.go:191] Error syncing pod 33fe9335-f85f-4c1d-be16-ba14e2c4de6b ("metrics-server-9975d5f86-92f4t_kube-system(33fe9335-f85f-4c1d-be16-ba14e2c4de6b)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0916 20:09:31.557173  929978 logs.go:138] Found kubelet problem: Sep 16 20:04:15 old-k8s-version-908284 kubelet[661]: E0916 20:04:15.348330     661 pod_workers.go:191] Error syncing pod 33fe9335-f85f-4c1d-be16-ba14e2c4de6b ("metrics-server-9975d5f86-92f4t_kube-system(33fe9335-f85f-4c1d-be16-ba14e2c4de6b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 20:09:31.557835  929978 logs.go:138] Found kubelet problem: Sep 16 20:04:18 old-k8s-version-908284 kubelet[661]: E0916 20:04:18.601266     661 pod_workers.go:191] Error syncing pod 76b4241b-0d25-4778-8c3a-942e26c51c8a ("dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"
	W0916 20:09:31.558199  929978 logs.go:138] Found kubelet problem: Sep 16 20:04:19 old-k8s-version-908284 kubelet[661]: E0916 20:04:19.605264     661 pod_workers.go:191] Error syncing pod 76b4241b-0d25-4778-8c3a-942e26c51c8a ("dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"
	W0916 20:09:31.558602  929978 logs.go:138] Found kubelet problem: Sep 16 20:04:20 old-k8s-version-908284 kubelet[661]: E0916 20:04:20.608597     661 pod_workers.go:191] Error syncing pod 76b4241b-0d25-4778-8c3a-942e26c51c8a ("dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"
	W0916 20:09:31.561566  929978 logs.go:138] Found kubelet problem: Sep 16 20:04:30 old-k8s-version-908284 kubelet[661]: E0916 20:04:30.358010     661 pod_workers.go:191] Error syncing pod 33fe9335-f85f-4c1d-be16-ba14e2c4de6b ("metrics-server-9975d5f86-92f4t_kube-system(33fe9335-f85f-4c1d-be16-ba14e2c4de6b)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0916 20:09:31.562560  929978 logs.go:138] Found kubelet problem: Sep 16 20:04:35 old-k8s-version-908284 kubelet[661]: E0916 20:04:35.645652     661 pod_workers.go:191] Error syncing pod 76b4241b-0d25-4778-8c3a-942e26c51c8a ("dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"
	W0916 20:09:31.562921  929978 logs.go:138] Found kubelet problem: Sep 16 20:04:38 old-k8s-version-908284 kubelet[661]: E0916 20:04:38.822485     661 pod_workers.go:191] Error syncing pod 76b4241b-0d25-4778-8c3a-942e26c51c8a ("dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"
	W0916 20:09:31.563132  929978 logs.go:138] Found kubelet problem: Sep 16 20:04:42 old-k8s-version-908284 kubelet[661]: E0916 20:04:42.348104     661 pod_workers.go:191] Error syncing pod 33fe9335-f85f-4c1d-be16-ba14e2c4de6b ("metrics-server-9975d5f86-92f4t_kube-system(33fe9335-f85f-4c1d-be16-ba14e2c4de6b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 20:09:31.563530  929978 logs.go:138] Found kubelet problem: Sep 16 20:04:54 old-k8s-version-908284 kubelet[661]: E0916 20:04:54.348469     661 pod_workers.go:191] Error syncing pod 76b4241b-0d25-4778-8c3a-942e26c51c8a ("dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"
	W0916 20:09:31.563754  929978 logs.go:138] Found kubelet problem: Sep 16 20:04:57 old-k8s-version-908284 kubelet[661]: E0916 20:04:57.357496     661 pod_workers.go:191] Error syncing pod 33fe9335-f85f-4c1d-be16-ba14e2c4de6b ("metrics-server-9975d5f86-92f4t_kube-system(33fe9335-f85f-4c1d-be16-ba14e2c4de6b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 20:09:31.564427  929978 logs.go:138] Found kubelet problem: Sep 16 20:05:06 old-k8s-version-908284 kubelet[661]: E0916 20:05:06.731209     661 pod_workers.go:191] Error syncing pod 76b4241b-0d25-4778-8c3a-942e26c51c8a ("dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"
	W0916 20:09:31.564644  929978 logs.go:138] Found kubelet problem: Sep 16 20:05:08 old-k8s-version-908284 kubelet[661]: E0916 20:05:08.348529     661 pod_workers.go:191] Error syncing pod 33fe9335-f85f-4c1d-be16-ba14e2c4de6b ("metrics-server-9975d5f86-92f4t_kube-system(33fe9335-f85f-4c1d-be16-ba14e2c4de6b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 20:09:31.565001  929978 logs.go:138] Found kubelet problem: Sep 16 20:05:08 old-k8s-version-908284 kubelet[661]: E0916 20:05:08.823660     661 pod_workers.go:191] Error syncing pod 76b4241b-0d25-4778-8c3a-942e26c51c8a ("dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"
	W0916 20:09:31.565355  929978 logs.go:138] Found kubelet problem: Sep 16 20:05:20 old-k8s-version-908284 kubelet[661]: E0916 20:05:20.347593     661 pod_workers.go:191] Error syncing pod 76b4241b-0d25-4778-8c3a-942e26c51c8a ("dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"
	W0916 20:09:31.567995  929978 logs.go:138] Found kubelet problem: Sep 16 20:05:21 old-k8s-version-908284 kubelet[661]: E0916 20:05:21.355694     661 pod_workers.go:191] Error syncing pod 33fe9335-f85f-4c1d-be16-ba14e2c4de6b ("metrics-server-9975d5f86-92f4t_kube-system(33fe9335-f85f-4c1d-be16-ba14e2c4de6b)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0916 20:09:31.568223  929978 logs.go:138] Found kubelet problem: Sep 16 20:05:32 old-k8s-version-908284 kubelet[661]: E0916 20:05:32.347612     661 pod_workers.go:191] Error syncing pod 33fe9335-f85f-4c1d-be16-ba14e2c4de6b ("metrics-server-9975d5f86-92f4t_kube-system(33fe9335-f85f-4c1d-be16-ba14e2c4de6b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 20:09:31.568579  929978 logs.go:138] Found kubelet problem: Sep 16 20:05:33 old-k8s-version-908284 kubelet[661]: E0916 20:05:33.347213     661 pod_workers.go:191] Error syncing pod 76b4241b-0d25-4778-8c3a-942e26c51c8a ("dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"
	W0916 20:09:31.568792  929978 logs.go:138] Found kubelet problem: Sep 16 20:05:43 old-k8s-version-908284 kubelet[661]: E0916 20:05:43.348442     661 pod_workers.go:191] Error syncing pod 33fe9335-f85f-4c1d-be16-ba14e2c4de6b ("metrics-server-9975d5f86-92f4t_kube-system(33fe9335-f85f-4c1d-be16-ba14e2c4de6b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 20:09:31.569143  929978 logs.go:138] Found kubelet problem: Sep 16 20:05:46 old-k8s-version-908284 kubelet[661]: E0916 20:05:46.347136     661 pod_workers.go:191] Error syncing pod 76b4241b-0d25-4778-8c3a-942e26c51c8a ("dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"
	W0916 20:09:31.569362  929978 logs.go:138] Found kubelet problem: Sep 16 20:05:56 old-k8s-version-908284 kubelet[661]: E0916 20:05:56.351275     661 pod_workers.go:191] Error syncing pod 33fe9335-f85f-4c1d-be16-ba14e2c4de6b ("metrics-server-9975d5f86-92f4t_kube-system(33fe9335-f85f-4c1d-be16-ba14e2c4de6b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 20:09:31.570009  929978 logs.go:138] Found kubelet problem: Sep 16 20:05:58 old-k8s-version-908284 kubelet[661]: E0916 20:05:58.873377     661 pod_workers.go:191] Error syncing pod 76b4241b-0d25-4778-8c3a-942e26c51c8a ("dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"
	W0916 20:09:31.570272  929978 logs.go:138] Found kubelet problem: Sep 16 20:06:08 old-k8s-version-908284 kubelet[661]: E0916 20:06:08.347409     661 pod_workers.go:191] Error syncing pod 33fe9335-f85f-4c1d-be16-ba14e2c4de6b ("metrics-server-9975d5f86-92f4t_kube-system(33fe9335-f85f-4c1d-be16-ba14e2c4de6b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 20:09:31.570642  929978 logs.go:138] Found kubelet problem: Sep 16 20:06:08 old-k8s-version-908284 kubelet[661]: E0916 20:06:08.822882     661 pod_workers.go:191] Error syncing pod 76b4241b-0d25-4778-8c3a-942e26c51c8a ("dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"
	W0916 20:09:31.570858  929978 logs.go:138] Found kubelet problem: Sep 16 20:06:22 old-k8s-version-908284 kubelet[661]: E0916 20:06:22.347687     661 pod_workers.go:191] Error syncing pod 33fe9335-f85f-4c1d-be16-ba14e2c4de6b ("metrics-server-9975d5f86-92f4t_kube-system(33fe9335-f85f-4c1d-be16-ba14e2c4de6b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 20:09:31.571215  929978 logs.go:138] Found kubelet problem: Sep 16 20:06:22 old-k8s-version-908284 kubelet[661]: E0916 20:06:22.348479     661 pod_workers.go:191] Error syncing pod 76b4241b-0d25-4778-8c3a-942e26c51c8a ("dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"
	W0916 20:09:31.571595  929978 logs.go:138] Found kubelet problem: Sep 16 20:06:35 old-k8s-version-908284 kubelet[661]: E0916 20:06:35.347744     661 pod_workers.go:191] Error syncing pod 33fe9335-f85f-4c1d-be16-ba14e2c4de6b ("metrics-server-9975d5f86-92f4t_kube-system(33fe9335-f85f-4c1d-be16-ba14e2c4de6b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 20:09:31.571819  929978 logs.go:138] Found kubelet problem: Sep 16 20:06:35 old-k8s-version-908284 kubelet[661]: E0916 20:06:35.348011     661 pod_workers.go:191] Error syncing pod 76b4241b-0d25-4778-8c3a-942e26c51c8a ("dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"
	W0916 20:09:31.572175  929978 logs.go:138] Found kubelet problem: Sep 16 20:06:50 old-k8s-version-908284 kubelet[661]: E0916 20:06:50.352305     661 pod_workers.go:191] Error syncing pod 76b4241b-0d25-4778-8c3a-942e26c51c8a ("dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"
	W0916 20:09:31.574827  929978 logs.go:138] Found kubelet problem: Sep 16 20:06:50 old-k8s-version-908284 kubelet[661]: E0916 20:06:50.358463     661 pod_workers.go:191] Error syncing pod 33fe9335-f85f-4c1d-be16-ba14e2c4de6b ("metrics-server-9975d5f86-92f4t_kube-system(33fe9335-f85f-4c1d-be16-ba14e2c4de6b)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0916 20:09:31.575128  929978 logs.go:138] Found kubelet problem: Sep 16 20:07:01 old-k8s-version-908284 kubelet[661]: E0916 20:07:01.347599     661 pod_workers.go:191] Error syncing pod 33fe9335-f85f-4c1d-be16-ba14e2c4de6b ("metrics-server-9975d5f86-92f4t_kube-system(33fe9335-f85f-4c1d-be16-ba14e2c4de6b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 20:09:31.575520  929978 logs.go:138] Found kubelet problem: Sep 16 20:07:04 old-k8s-version-908284 kubelet[661]: E0916 20:07:04.348052     661 pod_workers.go:191] Error syncing pod 76b4241b-0d25-4778-8c3a-942e26c51c8a ("dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"
	W0916 20:09:31.575751  929978 logs.go:138] Found kubelet problem: Sep 16 20:07:16 old-k8s-version-908284 kubelet[661]: E0916 20:07:16.347843     661 pod_workers.go:191] Error syncing pod 33fe9335-f85f-4c1d-be16-ba14e2c4de6b ("metrics-server-9975d5f86-92f4t_kube-system(33fe9335-f85f-4c1d-be16-ba14e2c4de6b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 20:09:31.576386  929978 logs.go:138] Found kubelet problem: Sep 16 20:07:20 old-k8s-version-908284 kubelet[661]: E0916 20:07:20.103183     661 pod_workers.go:191] Error syncing pod 76b4241b-0d25-4778-8c3a-942e26c51c8a ("dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"
	W0916 20:09:31.576743  929978 logs.go:138] Found kubelet problem: Sep 16 20:07:28 old-k8s-version-908284 kubelet[661]: E0916 20:07:28.823105     661 pod_workers.go:191] Error syncing pod 76b4241b-0d25-4778-8c3a-942e26c51c8a ("dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"
	W0916 20:09:31.576960  929978 logs.go:138] Found kubelet problem: Sep 16 20:07:29 old-k8s-version-908284 kubelet[661]: E0916 20:07:29.348615     661 pod_workers.go:191] Error syncing pod 33fe9335-f85f-4c1d-be16-ba14e2c4de6b ("metrics-server-9975d5f86-92f4t_kube-system(33fe9335-f85f-4c1d-be16-ba14e2c4de6b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 20:09:31.577183  929978 logs.go:138] Found kubelet problem: Sep 16 20:07:40 old-k8s-version-908284 kubelet[661]: E0916 20:07:40.348522     661 pod_workers.go:191] Error syncing pod 33fe9335-f85f-4c1d-be16-ba14e2c4de6b ("metrics-server-9975d5f86-92f4t_kube-system(33fe9335-f85f-4c1d-be16-ba14e2c4de6b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 20:09:31.577539  929978 logs.go:138] Found kubelet problem: Sep 16 20:07:43 old-k8s-version-908284 kubelet[661]: E0916 20:07:43.347650     661 pod_workers.go:191] Error syncing pod 76b4241b-0d25-4778-8c3a-942e26c51c8a ("dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"
	W0916 20:09:31.577758  929978 logs.go:138] Found kubelet problem: Sep 16 20:07:53 old-k8s-version-908284 kubelet[661]: E0916 20:07:53.347525     661 pod_workers.go:191] Error syncing pod 33fe9335-f85f-4c1d-be16-ba14e2c4de6b ("metrics-server-9975d5f86-92f4t_kube-system(33fe9335-f85f-4c1d-be16-ba14e2c4de6b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 20:09:31.578111  929978 logs.go:138] Found kubelet problem: Sep 16 20:07:58 old-k8s-version-908284 kubelet[661]: E0916 20:07:58.348899     661 pod_workers.go:191] Error syncing pod 76b4241b-0d25-4778-8c3a-942e26c51c8a ("dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"
	W0916 20:09:31.578325  929978 logs.go:138] Found kubelet problem: Sep 16 20:08:08 old-k8s-version-908284 kubelet[661]: E0916 20:08:08.350720     661 pod_workers.go:191] Error syncing pod 33fe9335-f85f-4c1d-be16-ba14e2c4de6b ("metrics-server-9975d5f86-92f4t_kube-system(33fe9335-f85f-4c1d-be16-ba14e2c4de6b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 20:09:31.578677  929978 logs.go:138] Found kubelet problem: Sep 16 20:08:11 old-k8s-version-908284 kubelet[661]: E0916 20:08:11.347616     661 pod_workers.go:191] Error syncing pod 76b4241b-0d25-4778-8c3a-942e26c51c8a ("dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"
	W0916 20:09:31.578889  929978 logs.go:138] Found kubelet problem: Sep 16 20:08:21 old-k8s-version-908284 kubelet[661]: E0916 20:08:21.347550     661 pod_workers.go:191] Error syncing pod 33fe9335-f85f-4c1d-be16-ba14e2c4de6b ("metrics-server-9975d5f86-92f4t_kube-system(33fe9335-f85f-4c1d-be16-ba14e2c4de6b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 20:09:31.579243  929978 logs.go:138] Found kubelet problem: Sep 16 20:08:24 old-k8s-version-908284 kubelet[661]: E0916 20:08:24.347935     661 pod_workers.go:191] Error syncing pod 76b4241b-0d25-4778-8c3a-942e26c51c8a ("dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"
	W0916 20:09:31.579459  929978 logs.go:138] Found kubelet problem: Sep 16 20:08:32 old-k8s-version-908284 kubelet[661]: E0916 20:08:32.348528     661 pod_workers.go:191] Error syncing pod 33fe9335-f85f-4c1d-be16-ba14e2c4de6b ("metrics-server-9975d5f86-92f4t_kube-system(33fe9335-f85f-4c1d-be16-ba14e2c4de6b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 20:09:31.579810  929978 logs.go:138] Found kubelet problem: Sep 16 20:08:37 old-k8s-version-908284 kubelet[661]: E0916 20:08:37.347607     661 pod_workers.go:191] Error syncing pod 76b4241b-0d25-4778-8c3a-942e26c51c8a ("dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"
	W0916 20:09:31.580022  929978 logs.go:138] Found kubelet problem: Sep 16 20:08:47 old-k8s-version-908284 kubelet[661]: E0916 20:08:47.347660     661 pod_workers.go:191] Error syncing pod 33fe9335-f85f-4c1d-be16-ba14e2c4de6b ("metrics-server-9975d5f86-92f4t_kube-system(33fe9335-f85f-4c1d-be16-ba14e2c4de6b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 20:09:31.580382  929978 logs.go:138] Found kubelet problem: Sep 16 20:08:52 old-k8s-version-908284 kubelet[661]: E0916 20:08:52.352536     661 pod_workers.go:191] Error syncing pod 76b4241b-0d25-4778-8c3a-942e26c51c8a ("dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"
	W0916 20:09:31.580592  929978 logs.go:138] Found kubelet problem: Sep 16 20:08:58 old-k8s-version-908284 kubelet[661]: E0916 20:08:58.348436     661 pod_workers.go:191] Error syncing pod 33fe9335-f85f-4c1d-be16-ba14e2c4de6b ("metrics-server-9975d5f86-92f4t_kube-system(33fe9335-f85f-4c1d-be16-ba14e2c4de6b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 20:09:31.580951  929978 logs.go:138] Found kubelet problem: Sep 16 20:09:04 old-k8s-version-908284 kubelet[661]: E0916 20:09:04.348257     661 pod_workers.go:191] Error syncing pod 76b4241b-0d25-4778-8c3a-942e26c51c8a ("dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"
	W0916 20:09:31.581162  929978 logs.go:138] Found kubelet problem: Sep 16 20:09:12 old-k8s-version-908284 kubelet[661]: E0916 20:09:12.351486     661 pod_workers.go:191] Error syncing pod 33fe9335-f85f-4c1d-be16-ba14e2c4de6b ("metrics-server-9975d5f86-92f4t_kube-system(33fe9335-f85f-4c1d-be16-ba14e2c4de6b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 20:09:31.581519  929978 logs.go:138] Found kubelet problem: Sep 16 20:09:15 old-k8s-version-908284 kubelet[661]: E0916 20:09:15.347023     661 pod_workers.go:191] Error syncing pod 76b4241b-0d25-4778-8c3a-942e26c51c8a ("dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"
	W0916 20:09:31.581740  929978 logs.go:138] Found kubelet problem: Sep 16 20:09:25 old-k8s-version-908284 kubelet[661]: E0916 20:09:25.347602     661 pod_workers.go:191] Error syncing pod 33fe9335-f85f-4c1d-be16-ba14e2c4de6b ("metrics-server-9975d5f86-92f4t_kube-system(33fe9335-f85f-4c1d-be16-ba14e2c4de6b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 20:09:31.582093  929978 logs.go:138] Found kubelet problem: Sep 16 20:09:26 old-k8s-version-908284 kubelet[661]: E0916 20:09:26.347078     661 pod_workers.go:191] Error syncing pod 76b4241b-0d25-4778-8c3a-942e26c51c8a ("dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"
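
	The two recurring kubelet problems above are container start failures: CrashLoopBackOff on dashboard-metrics-scraper and ImagePullBackOff on metrics-server's unresolvable fake.domain image reference. Both can be inspected directly from the pod names in the log; a minimal sketch, assuming the kubeconfig context matches the profile name as elsewhere in this report:

	    # Why is the scraper crash-looping? Check the previous container's output.
	    kubectl --context old-k8s-version-908284 -n kubernetes-dashboard logs dashboard-metrics-scraper-8d5bb5db8-f4csf --previous
	    # Why can't metrics-server pull? The pod events show the failing image reference.
	    kubectl --context old-k8s-version-908284 -n kube-system describe pod metrics-server-9975d5f86-92f4t
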
	I0916 20:09:31.582119  929978 logs.go:123] Gathering logs for kube-apiserver [5125f7e68621ccafdca0574900a08627704d25dad8c6cb286177bcceafb722f3] ...
	I0916 20:09:31.582148  929978 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5125f7e68621ccafdca0574900a08627704d25dad8c6cb286177bcceafb722f3"
	I0916 20:09:31.714254  929978 logs.go:123] Gathering logs for etcd [445fc47c234683c922f709203c6b1528636824cf9904de9437bfad48f5bb40bb] ...
	I0916 20:09:31.714332  929978 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 445fc47c234683c922f709203c6b1528636824cf9904de9437bfad48f5bb40bb"
	I0916 20:09:31.833273  929978 logs.go:123] Gathering logs for kube-controller-manager [34ed62a120d6504514cf022878498c06dd4558aff9f75e87eff0e60b822c82b0] ...
	I0916 20:09:31.833356  929978 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 34ed62a120d6504514cf022878498c06dd4558aff9f75e87eff0e60b822c82b0"
	I0916 20:09:31.917196  929978 logs.go:123] Gathering logs for containerd ...
	I0916 20:09:31.917281  929978 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0916 20:09:31.983819  929978 logs.go:123] Gathering logs for dmesg ...
	I0916 20:09:31.983909  929978 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 20:09:32.006773  929978 logs.go:123] Gathering logs for etcd [b1144ab00f4c3249a9dbe4fed6b1368a14fe5aba7783b45f4fa53cc5e203ce97] ...
	I0916 20:09:32.006803  929978 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b1144ab00f4c3249a9dbe4fed6b1368a14fe5aba7783b45f4fa53cc5e203ce97"
	I0916 20:09:32.052165  929978 logs.go:123] Gathering logs for coredns [7d87e62b9d7540a8d0fcd059083894029feeab7ea8f0a8cacd8811b01eee9456] ...
	I0916 20:09:32.052195  929978 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7d87e62b9d7540a8d0fcd059083894029feeab7ea8f0a8cacd8811b01eee9456"
	I0916 20:09:32.090881  929978 logs.go:123] Gathering logs for kube-proxy [d5453c9c01ecd9cc0d09b6bf868960b6312637344560d06faeca7704cd561607] ...
	I0916 20:09:32.090952  929978 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d5453c9c01ecd9cc0d09b6bf868960b6312637344560d06faeca7704cd561607"
	I0916 20:09:32.131065  929978 logs.go:123] Gathering logs for storage-provisioner [05471d3e1c31ae8e82b32033906c8b2d9d329a3ea9850acf38ac17d8175331ed] ...
	I0916 20:09:32.131132  929978 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 05471d3e1c31ae8e82b32033906c8b2d9d329a3ea9850acf38ac17d8175331ed"
	I0916 20:09:32.180567  929978 logs.go:123] Gathering logs for describe nodes ...
	I0916 20:09:32.180595  929978 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 20:09:32.332711  929978 logs.go:123] Gathering logs for kube-scheduler [e72d9de27e5e47273b9413248459297d25f9204a3d3bdbe871f68a09eed8cc31] ...
	I0916 20:09:32.332739  929978 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e72d9de27e5e47273b9413248459297d25f9204a3d3bdbe871f68a09eed8cc31"
	I0916 20:09:32.408425  929978 logs.go:123] Gathering logs for kube-proxy [f1a07ea1e6c1902d18aad82fc148e1704278cff58e2f38adcb46954946abe5af] ...
	I0916 20:09:32.408453  929978 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f1a07ea1e6c1902d18aad82fc148e1704278cff58e2f38adcb46954946abe5af"
	I0916 20:09:32.454484  929978 logs.go:123] Gathering logs for kubernetes-dashboard [0f208bbb678a1a4ad386a9e37deb1b867d200257d179670c26adcc562f0e4cf2] ...
	I0916 20:09:32.454512  929978 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f208bbb678a1a4ad386a9e37deb1b867d200257d179670c26adcc562f0e4cf2"
	I0916 20:09:32.498612  929978 logs.go:123] Gathering logs for storage-provisioner [acc8d7546233616133689ca5f3763e68bd10522884488e5cf9ab9bf09947cb67] ...
	I0916 20:09:32.498693  929978 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 acc8d7546233616133689ca5f3763e68bd10522884488e5cf9ab9bf09947cb67"
	I0916 20:09:32.539543  929978 logs.go:123] Gathering logs for container status ...
	I0916 20:09:32.539575  929978 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
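
	Taken together, the gathering pass above is just per-container `crictl logs` plus the containerd journal and the kernel buffer; a rough standalone equivalent (assuming crictl is on PATH on the node):

	    # Tail the last 400 lines of every container, running or exited.
	    for id in $(sudo crictl ps -a --quiet); do
	        echo "== ${id} =="
	        sudo crictl logs --tail 400 "${id}"
	    done
	    sudo journalctl -u containerd -n 400                         # runtime service log
	    sudo dmesg --level warn,err,crit,alert,emerg | tail -n 400   # kernel warnings and errors
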
	I0916 20:09:32.582090  929978 out.go:358] Setting ErrFile to fd 2...
	I0916 20:09:32.582117  929978 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0916 20:09:32.582172  929978 out.go:270] X Problems detected in kubelet:
	W0916 20:09:32.582186  929978 out.go:270]   Sep 16 20:09:04 old-k8s-version-908284 kubelet[661]: E0916 20:09:04.348257     661 pod_workers.go:191] Error syncing pod 76b4241b-0d25-4778-8c3a-942e26c51c8a ("dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"
	W0916 20:09:32.582194  929978 out.go:270]   Sep 16 20:09:12 old-k8s-version-908284 kubelet[661]: E0916 20:09:12.351486     661 pod_workers.go:191] Error syncing pod 33fe9335-f85f-4c1d-be16-ba14e2c4de6b ("metrics-server-9975d5f86-92f4t_kube-system(33fe9335-f85f-4c1d-be16-ba14e2c4de6b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 20:09:32.582206  929978 out.go:270]   Sep 16 20:09:15 old-k8s-version-908284 kubelet[661]: E0916 20:09:15.347023     661 pod_workers.go:191] Error syncing pod 76b4241b-0d25-4778-8c3a-942e26c51c8a ("dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"
	W0916 20:09:32.582220  929978 out.go:270]   Sep 16 20:09:25 old-k8s-version-908284 kubelet[661]: E0916 20:09:25.347602     661 pod_workers.go:191] Error syncing pod 33fe9335-f85f-4c1d-be16-ba14e2c4de6b ("metrics-server-9975d5f86-92f4t_kube-system(33fe9335-f85f-4c1d-be16-ba14e2c4de6b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 20:09:32.582225  929978 out.go:270]   Sep 16 20:09:26 old-k8s-version-908284 kubelet[661]: E0916 20:09:26.347078     661 pod_workers.go:191] Error syncing pod 76b4241b-0d25-4778-8c3a-942e26c51c8a ("dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"
	I0916 20:09:32.582230  929978 out.go:358] Setting ErrFile to fd 2...
	I0916 20:09:32.582236  929978 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 20:09:34.098897  939831 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-931636
	
	I0916 20:09:34.098926  939831 ubuntu.go:169] provisioning hostname "embed-certs-931636"
	I0916 20:09:34.099009  939831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-931636
	I0916 20:09:34.116982  939831 main.go:141] libmachine: Using SSH client type: native
	I0916 20:09:34.117231  939831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 33837 <nil> <nil>}
	I0916 20:09:34.117258  939831 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-931636 && echo "embed-certs-931636" | sudo tee /etc/hostname
	I0916 20:09:34.276609  939831 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-931636
	
	I0916 20:09:34.276693  939831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-931636
	I0916 20:09:34.295184  939831 main.go:141] libmachine: Using SSH client type: native
	I0916 20:09:34.295482  939831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 33837 <nil> <nil>}
	I0916 20:09:34.295508  939831 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-931636' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-931636/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-931636' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 20:09:34.436279  939831 main.go:141] libmachine: SSH cmd err, output: <nil>: 
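
	For readability, the embedded /etc/hosts script above, annotated (HOSTNAME stands in for the profile name):

	    HOSTNAME=embed-certs-931636
	    if ! grep -xq ".*\s${HOSTNAME}" /etc/hosts; then     # hostname not mapped yet
	        if grep -xq '127.0.1.1\s.*' /etc/hosts; then     # a 127.0.1.1 line exists: rewrite it
	            sudo sed -i "s/^127.0.1.1\s.*/127.0.1.1 ${HOSTNAME}/g" /etc/hosts
	        else                                             # otherwise append a fresh entry
	            echo "127.0.1.1 ${HOSTNAME}" | sudo tee -a /etc/hosts
	        fi
	    fi
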
	I0916 20:09:34.436305  939831 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19649-716050/.minikube CaCertPath:/home/jenkins/minikube-integration/19649-716050/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19649-716050/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19649-716050/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19649-716050/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19649-716050/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19649-716050/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19649-716050/.minikube}
	I0916 20:09:34.436334  939831 ubuntu.go:177] setting up certificates
	I0916 20:09:34.436346  939831 provision.go:84] configureAuth start
	I0916 20:09:34.436406  939831 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-931636
	I0916 20:09:34.452997  939831 provision.go:143] copyHostCerts
	I0916 20:09:34.453068  939831 exec_runner.go:144] found /home/jenkins/minikube-integration/19649-716050/.minikube/key.pem, removing ...
	I0916 20:09:34.453082  939831 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19649-716050/.minikube/key.pem
	I0916 20:09:34.453161  939831 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19649-716050/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19649-716050/.minikube/key.pem (1675 bytes)
	I0916 20:09:34.453257  939831 exec_runner.go:144] found /home/jenkins/minikube-integration/19649-716050/.minikube/ca.pem, removing ...
	I0916 20:09:34.453267  939831 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19649-716050/.minikube/ca.pem
	I0916 20:09:34.453295  939831 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19649-716050/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19649-716050/.minikube/ca.pem (1082 bytes)
	I0916 20:09:34.453354  939831 exec_runner.go:144] found /home/jenkins/minikube-integration/19649-716050/.minikube/cert.pem, removing ...
	I0916 20:09:34.453363  939831 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19649-716050/.minikube/cert.pem
	I0916 20:09:34.453388  939831 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19649-716050/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19649-716050/.minikube/cert.pem (1123 bytes)
	I0916 20:09:34.453445  939831 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19649-716050/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19649-716050/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19649-716050/.minikube/certs/ca-key.pem org=jenkins.embed-certs-931636 san=[127.0.0.1 192.168.76.2 embed-certs-931636 localhost minikube]
	I0916 20:09:34.650696  939831 provision.go:177] copyRemoteCerts
	I0916 20:09:34.650765  939831 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 20:09:34.650807  939831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-931636
	I0916 20:09:34.670486  939831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33837 SSHKeyPath:/home/jenkins/minikube-integration/19649-716050/.minikube/machines/embed-certs-931636/id_rsa Username:docker}
	I0916 20:09:34.769483  939831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-716050/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0916 20:09:34.796054  939831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-716050/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0916 20:09:34.827990  939831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-716050/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0916 20:09:34.852594  939831 provision.go:87] duration metric: took 416.2254ms to configureAuth
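
	minikube generates these certificates in Go; for orientation only, a roughly equivalent openssl invocation with the same SANs and org from the log (self-signed here for brevity, whereas minikube signs with its ca.pem/ca-key.pem; -addext needs OpenSSL 1.1.1+):

	    openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
	        -keyout server-key.pem -out server.pem \
	        -subj "/O=jenkins.embed-certs-931636" \
	        -addext "subjectAltName=IP:127.0.0.1,IP:192.168.76.2,DNS:embed-certs-931636,DNS:localhost,DNS:minikube"
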
	I0916 20:09:34.852623  939831 ubuntu.go:193] setting minikube options for container-runtime
	I0916 20:09:34.852813  939831 config.go:182] Loaded profile config "embed-certs-931636": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 20:09:34.852827  939831 machine.go:96] duration metric: took 3.929671327s to provisionDockerMachine
	I0916 20:09:34.852834  939831 client.go:171] duration metric: took 10.640786856s to LocalClient.Create
	I0916 20:09:34.852847  939831 start.go:167] duration metric: took 10.640841863s to libmachine.API.Create "embed-certs-931636"
	I0916 20:09:34.852854  939831 start.go:293] postStartSetup for "embed-certs-931636" (driver="docker")
	I0916 20:09:34.852867  939831 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 20:09:34.852925  939831 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 20:09:34.852966  939831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-931636
	I0916 20:09:34.871745  939831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33837 SSHKeyPath:/home/jenkins/minikube-integration/19649-716050/.minikube/machines/embed-certs-931636/id_rsa Username:docker}
	I0916 20:09:34.968998  939831 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 20:09:34.972666  939831 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 20:09:34.972702  939831 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 20:09:34.972713  939831 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 20:09:34.972720  939831 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0916 20:09:34.972730  939831 filesync.go:126] Scanning /home/jenkins/minikube-integration/19649-716050/.minikube/addons for local assets ...
	I0916 20:09:34.972795  939831 filesync.go:126] Scanning /home/jenkins/minikube-integration/19649-716050/.minikube/files for local assets ...
	I0916 20:09:34.972868  939831 filesync.go:149] local asset: /home/jenkins/minikube-integration/19649-716050/.minikube/files/etc/ssl/certs/7214282.pem -> 7214282.pem in /etc/ssl/certs
	I0916 20:09:34.972973  939831 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 20:09:34.981790  939831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-716050/.minikube/files/etc/ssl/certs/7214282.pem --> /etc/ssl/certs/7214282.pem (1708 bytes)
	I0916 20:09:35.009466  939831 start.go:296] duration metric: took 156.591359ms for postStartSetup
	I0916 20:09:35.009961  939831 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-931636
	I0916 20:09:35.029334  939831 profile.go:143] Saving config to /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/embed-certs-931636/config.json ...
	I0916 20:09:35.029737  939831 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 20:09:35.029803  939831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-931636
	I0916 20:09:35.047419  939831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33837 SSHKeyPath:/home/jenkins/minikube-integration/19649-716050/.minikube/machines/embed-certs-931636/id_rsa Username:docker}
	I0916 20:09:35.144321  939831 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 20:09:35.149916  939831 start.go:128] duration metric: took 10.941037574s to createHost
	I0916 20:09:35.149939  939831 start.go:83] releasing machines lock for "embed-certs-931636", held for 10.941203978s
	I0916 20:09:35.150013  939831 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-931636
	I0916 20:09:35.167841  939831 ssh_runner.go:195] Run: cat /version.json
	I0916 20:09:35.167873  939831 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 20:09:35.167909  939831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-931636
	I0916 20:09:35.167977  939831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-931636
	I0916 20:09:35.187721  939831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33837 SSHKeyPath:/home/jenkins/minikube-integration/19649-716050/.minikube/machines/embed-certs-931636/id_rsa Username:docker}
	I0916 20:09:35.189151  939831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33837 SSHKeyPath:/home/jenkins/minikube-integration/19649-716050/.minikube/machines/embed-certs-931636/id_rsa Username:docker}
	I0916 20:09:35.410477  939831 ssh_runner.go:195] Run: systemctl --version
	I0916 20:09:35.415246  939831 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 20:09:35.419576  939831 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0916 20:09:35.445575  939831 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0916 20:09:35.445656  939831 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 20:09:35.474944  939831 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
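
	Unpacked, the two find(1) passes above do just this (file names taken from the log; kindnet is the CNI minikube keeps active here):

	    # 1. Normalize any loopback CNI config: pin cniVersion (and ensure a "name" field).
	    sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' /etc/cni/net.d/*loopback.conf*
	    # 2. Park competing bridge/podman configs by renaming them out of the way.
	    sudo mv /etc/cni/net.d/87-podman-bridge.conflist /etc/cni/net.d/87-podman-bridge.conflist.mk_disabled
	    sudo mv /etc/cni/net.d/100-crio-bridge.conf /etc/cni/net.d/100-crio-bridge.conf.mk_disabled
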
	I0916 20:09:35.474970  939831 start.go:495] detecting cgroup driver to use...
	I0916 20:09:35.475004  939831 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0916 20:09:35.475066  939831 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0916 20:09:35.488804  939831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0916 20:09:35.500636  939831 docker.go:217] disabling cri-docker service (if available) ...
	I0916 20:09:35.500745  939831 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 20:09:35.515000  939831 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 20:09:35.530738  939831 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 20:09:35.617584  939831 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 20:09:35.723369  939831 docker.go:233] disabling docker service ...
	I0916 20:09:35.723444  939831 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 20:09:35.745564  939831 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 20:09:35.758126  939831 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 20:09:35.863419  939831 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 20:09:35.945997  939831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 20:09:35.957632  939831 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 20:09:35.977237  939831 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0916 20:09:35.988748  939831 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0916 20:09:35.998963  939831 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0916 20:09:35.999075  939831 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0916 20:09:36.011619  939831 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 20:09:36.025291  939831 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0916 20:09:36.035824  939831 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 20:09:36.047812  939831 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 20:09:36.057900  939831 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0916 20:09:36.069652  939831 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0916 20:09:36.081493  939831 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0916 20:09:36.094079  939831 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 20:09:36.103200  939831 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 20:09:36.112581  939831 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 20:09:36.210294  939831 ssh_runner.go:195] Run: sudo systemctl restart containerd
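
	The containerd reconfiguration above reduces to a handful of in-place edits followed by a restart; the load-bearing ones, consolidated:

	    # Use the pause:3.10 sandbox image and the cgroupfs (not systemd) cgroup driver.
	    sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml
	    sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml
	    # Point CNI at the standard config directory, then apply.
	    sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml
	    sudo systemctl daemon-reload && sudo systemctl restart containerd
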
	I0916 20:09:36.344141  939831 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0916 20:09:36.344282  939831 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0916 20:09:36.350843  939831 start.go:563] Will wait 60s for crictl version
	I0916 20:09:36.350962  939831 ssh_runner.go:195] Run: which crictl
	I0916 20:09:36.356129  939831 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 20:09:36.406239  939831 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
	I0916 20:09:36.406366  939831 ssh_runner.go:195] Run: containerd --version
	I0916 20:09:36.430159  939831 ssh_runner.go:195] Run: containerd --version
	I0916 20:09:36.456654  939831 out.go:177] * Preparing Kubernetes v1.31.1 on containerd 1.7.22 ...
	I0916 20:09:36.458598  939831 cli_runner.go:164] Run: docker network inspect embed-certs-931636 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 20:09:36.473099  939831 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0916 20:09:36.476834  939831 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 20:09:36.487632  939831 kubeadm.go:883] updating cluster {Name:embed-certs-931636 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-931636 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 20:09:36.487750  939831 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0916 20:09:36.487828  939831 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 20:09:36.526622  939831 containerd.go:627] all images are preloaded for containerd runtime.
	I0916 20:09:36.526647  939831 containerd.go:534] Images already preloaded, skipping extraction
	I0916 20:09:36.526715  939831 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 20:09:36.563374  939831 containerd.go:627] all images are preloaded for containerd runtime.
	I0916 20:09:36.563399  939831 cache_images.go:84] Images are preloaded, skipping loading
	I0916 20:09:36.563407  939831 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.31.1 containerd true true} ...
	I0916 20:09:36.563497  939831 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-931636 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-931636 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 20:09:36.563569  939831 ssh_runner.go:195] Run: sudo crictl info
	I0916 20:09:36.603177  939831 cni.go:84] Creating CNI manager for ""
	I0916 20:09:36.603201  939831 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0916 20:09:36.603213  939831 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 20:09:36.603235  939831 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-931636 NodeName:embed-certs-931636 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 20:09:36.603385  939831 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "embed-certs-931636"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0916 20:09:36.603453  939831 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 20:09:36.612386  939831 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 20:09:36.612458  939831 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 20:09:36.621172  939831 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I0916 20:09:36.640481  939831 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 20:09:36.659174  939831 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
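
	Before that file is handed to kubeadm, it can be sanity-checked; a sketch (kubeadm gained `config validate` in recent releases, and `--dry-run` rehearses init without mutating the node):

	    sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
	    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run
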
	I0916 20:09:36.677835  939831 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0916 20:09:36.681211  939831 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 20:09:36.692144  939831 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 20:09:36.774243  939831 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 20:09:36.788356  939831 certs.go:68] Setting up /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/embed-certs-931636 for IP: 192.168.76.2
	I0916 20:09:36.788442  939831 certs.go:194] generating shared ca certs ...
	I0916 20:09:36.788474  939831 certs.go:226] acquiring lock for ca certs: {Name:mk293c0d980623a78c1c8e4e7829d120cb991002 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 20:09:36.788645  939831 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19649-716050/.minikube/ca.key
	I0916 20:09:36.788728  939831 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19649-716050/.minikube/proxy-client-ca.key
	I0916 20:09:36.788761  939831 certs.go:256] generating profile certs ...
	I0916 20:09:36.788846  939831 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/embed-certs-931636/client.key
	I0916 20:09:36.788894  939831 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/embed-certs-931636/client.crt with IP's: []
	I0916 20:09:37.317544  939831 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/embed-certs-931636/client.crt ...
	I0916 20:09:37.317576  939831 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/embed-certs-931636/client.crt: {Name:mk419c9294d834eb13e718007283ec401ddd2871 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 20:09:37.318378  939831 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/embed-certs-931636/client.key ...
	I0916 20:09:37.318395  939831 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/embed-certs-931636/client.key: {Name:mkf7ffb41d4d77b600762ecb0f7286d1b1069381 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 20:09:37.318499  939831 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/embed-certs-931636/apiserver.key.b7941212
	I0916 20:09:37.318520  939831 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/embed-certs-931636/apiserver.crt.b7941212 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I0916 20:09:37.882899  939831 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/embed-certs-931636/apiserver.crt.b7941212 ...
	I0916 20:09:37.882931  939831 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/embed-certs-931636/apiserver.crt.b7941212: {Name:mkff887f4045bebb4857c88fba0c2116f52236c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 20:09:37.883122  939831 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/embed-certs-931636/apiserver.key.b7941212 ...
	I0916 20:09:37.883144  939831 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/embed-certs-931636/apiserver.key.b7941212: {Name:mk172ccde9cdadecc0b2518259e2cfecec3d6fe1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 20:09:37.883755  939831 certs.go:381] copying /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/embed-certs-931636/apiserver.crt.b7941212 -> /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/embed-certs-931636/apiserver.crt
	I0916 20:09:37.883846  939831 certs.go:385] copying /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/embed-certs-931636/apiserver.key.b7941212 -> /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/embed-certs-931636/apiserver.key
	I0916 20:09:37.883906  939831 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/embed-certs-931636/proxy-client.key
	I0916 20:09:37.883925  939831 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/embed-certs-931636/proxy-client.crt with IP's: []
	I0916 20:09:38.097377  939831 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/embed-certs-931636/proxy-client.crt ...
	I0916 20:09:38.097411  939831 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/embed-certs-931636/proxy-client.crt: {Name:mk068e2677ff0e7e7d1b6c3bf2257ece7f166ef5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 20:09:38.098207  939831 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/embed-certs-931636/proxy-client.key ...
	I0916 20:09:38.098228  939831 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/embed-certs-931636/proxy-client.key: {Name:mkec825a3301c21fa8dba07f5e611f441d70a70b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 20:09:38.098902  939831 certs.go:484] found cert: /home/jenkins/minikube-integration/19649-716050/.minikube/certs/721428.pem (1338 bytes)
	W0916 20:09:38.098950  939831 certs.go:480] ignoring /home/jenkins/minikube-integration/19649-716050/.minikube/certs/721428_empty.pem, impossibly tiny 0 bytes
	I0916 20:09:38.098959  939831 certs.go:484] found cert: /home/jenkins/minikube-integration/19649-716050/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 20:09:38.098987  939831 certs.go:484] found cert: /home/jenkins/minikube-integration/19649-716050/.minikube/certs/ca.pem (1082 bytes)
	I0916 20:09:38.099035  939831 certs.go:484] found cert: /home/jenkins/minikube-integration/19649-716050/.minikube/certs/cert.pem (1123 bytes)
	I0916 20:09:38.099064  939831 certs.go:484] found cert: /home/jenkins/minikube-integration/19649-716050/.minikube/certs/key.pem (1675 bytes)
	I0916 20:09:38.099123  939831 certs.go:484] found cert: /home/jenkins/minikube-integration/19649-716050/.minikube/files/etc/ssl/certs/7214282.pem (1708 bytes)
	I0916 20:09:38.099904  939831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-716050/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 20:09:38.125673  939831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-716050/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0916 20:09:38.150917  939831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-716050/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 20:09:38.179689  939831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-716050/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 20:09:38.206129  939831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/embed-certs-931636/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0916 20:09:38.238907  939831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/embed-certs-931636/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0916 20:09:38.264206  939831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/embed-certs-931636/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 20:09:38.289296  939831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/embed-certs-931636/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0916 20:09:38.320109  939831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-716050/.minikube/files/etc/ssl/certs/7214282.pem --> /usr/share/ca-certificates/7214282.pem (1708 bytes)
	I0916 20:09:38.351232  939831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-716050/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 20:09:38.383404  939831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-716050/.minikube/certs/721428.pem --> /usr/share/ca-certificates/721428.pem (1338 bytes)
	I0916 20:09:38.410365  939831 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 20:09:38.429573  939831 ssh_runner.go:195] Run: openssl version
	I0916 20:09:38.436960  939831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7214282.pem && ln -fs /usr/share/ca-certificates/7214282.pem /etc/ssl/certs/7214282.pem"
	I0916 20:09:38.447256  939831 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7214282.pem
	I0916 20:09:38.451018  939831 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 19:23 /usr/share/ca-certificates/7214282.pem
	I0916 20:09:38.451129  939831 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7214282.pem
	I0916 20:09:38.458249  939831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7214282.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 20:09:38.468207  939831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 20:09:38.477918  939831 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 20:09:38.481546  939831 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 19:12 /usr/share/ca-certificates/minikubeCA.pem
	I0916 20:09:38.481642  939831 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 20:09:38.488839  939831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 20:09:38.498417  939831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/721428.pem && ln -fs /usr/share/ca-certificates/721428.pem /etc/ssl/certs/721428.pem"
	I0916 20:09:38.508483  939831 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/721428.pem
	I0916 20:09:38.512262  939831 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 19:23 /usr/share/ca-certificates/721428.pem
	I0916 20:09:38.512365  939831 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/721428.pem
	I0916 20:09:38.519955  939831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/721428.pem /etc/ssl/certs/51391683.0"
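
	The test -L / ln -fs pairs above implement OpenSSL's subject-hash lookup convention: a CA in /etc/ssl/certs is resolved via a symlink named <subject-hash>.0. Reproduced by hand for the minikubeCA case (the hash b5213941 matches the line above):

	    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints b5213941
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
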
	I0916 20:09:38.529087  939831 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 20:09:38.532356  939831 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 20:09:38.532454  939831 kubeadm.go:392] StartCluster: {Name:embed-certs-931636 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-931636 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 20:09:38.532534  939831 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0916 20:09:38.532595  939831 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 20:09:38.570301  939831 cri.go:89] found id: ""
	I0916 20:09:38.570372  939831 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 20:09:38.578989  939831 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0916 20:09:38.588021  939831 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0916 20:09:38.588140  939831 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 20:09:38.597013  939831 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 20:09:38.597037  939831 kubeadm.go:157] found existing configuration files:
	
	I0916 20:09:38.597093  939831 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0916 20:09:38.606194  939831 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 20:09:38.606300  939831 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0916 20:09:38.614778  939831 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0916 20:09:38.623845  939831 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 20:09:38.623914  939831 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0916 20:09:38.632980  939831 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0916 20:09:38.642175  939831 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 20:09:38.642267  939831 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 20:09:38.652845  939831 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0916 20:09:38.662484  939831 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 20:09:38.662558  939831 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0916 20:09:38.671199  939831 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0916 20:09:38.714598  939831 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0916 20:09:38.714903  939831 kubeadm.go:310] [preflight] Running pre-flight checks
	I0916 20:09:38.734439  939831 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0916 20:09:38.734529  939831 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1069-aws
	I0916 20:09:38.734568  939831 kubeadm.go:310] OS: Linux
	I0916 20:09:38.734627  939831 kubeadm.go:310] CGROUPS_CPU: enabled
	I0916 20:09:38.734681  939831 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0916 20:09:38.734732  939831 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0916 20:09:38.734783  939831 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0916 20:09:38.734833  939831 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0916 20:09:38.734884  939831 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0916 20:09:38.734932  939831 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0916 20:09:38.734983  939831 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0916 20:09:38.735031  939831 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0916 20:09:38.809862  939831 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0916 20:09:38.809977  939831 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0916 20:09:38.810075  939831 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0916 20:09:38.818425  939831 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0916 20:09:38.821040  939831 out.go:235]   - Generating certificates and keys ...
	I0916 20:09:38.821146  939831 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0916 20:09:38.821222  939831 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0916 20:09:42.583550  929978 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0916 20:09:42.606744  929978 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I0916 20:09:42.608187  929978 out.go:201] 
	W0916 20:09:42.609509  929978 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W0916 20:09:42.609759  929978 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	W0916 20:09:42.609867  929978 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	W0916 20:09:42.609907  929978 out.go:270] * 
	W0916 20:09:42.610998  929978 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0916 20:09:42.613117  929978 out.go:201] 
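	
	The K8S_UNHEALTHY_CONTROL_PLANE exit above is the SecondStart failure itself: the API server answers /healthz with 200 just before it, yet minikube's wait for the control plane to report v1.20.0 times out (the linked issue #11417 tracks this pattern). The suggested recovery, spelled out as commands (a sketch; the start flags the test actually used are not shown here, so the ones below are illustrative):
	
	$ out/minikube-linux-arm64 delete --all --purge
	$ out/minikube-linux-arm64 start -p old-k8s-version-908284 --kubernetes-version=v1.20.0 --driver=docker --container-runtime=containerd
	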
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	defd3bd309095       523cad1a4df73       2 minutes ago       Exited              dashboard-metrics-scraper   5                   0fec3f3d85b11       dashboard-metrics-scraper-8d5bb5db8-f4csf
	0f208bbb678a1       20b332c9a70d8       5 minutes ago       Running             kubernetes-dashboard        0                   ca7150cfe3355       kubernetes-dashboard-cd95d586-tvt7k
	eb8d587a9ae36       6a23fa8fd2b78       5 minutes ago       Running             kindnet-cni                 1                   d94c168d93dd2       kindnet-jwhtp
	7d87e62b9d754       db91994f4ee8f       5 minutes ago       Running             coredns                     1                   9aebf67bb79d7       coredns-74ff55c5b-h4fss
	acc8d75462336       ba04bb24b9575       5 minutes ago       Running             storage-provisioner         1                   05f18d10bf09b       storage-provisioner
	d5453c9c01ecd       25a5233254979       5 minutes ago       Running             kube-proxy                  1                   5e7c41b75cb24       kube-proxy-5drw5
	a1a087c34a14d       1611cd07b61d5       5 minutes ago       Running             busybox                     1                   dcdba9574df9d       busybox
	445fc47c23468       05b738aa1bc63       6 minutes ago       Running             etcd                        1                   d14a11c20ccfc       etcd-old-k8s-version-908284
	7cdfa563a6ecf       e7605f88f17d6       6 minutes ago       Running             kube-scheduler              1                   9563f591205ce       kube-scheduler-old-k8s-version-908284
	5125f7e68621c       2c08bbbc02d3a       6 minutes ago       Running             kube-apiserver              1                   dc26362899998       kube-apiserver-old-k8s-version-908284
	e72de83d21dbe       1df8a2b116bd1       6 minutes ago       Running             kube-controller-manager     1                   d3e60be21cf24       kube-controller-manager-old-k8s-version-908284
	9d5f84a96fb26       1611cd07b61d5       6 minutes ago       Exited              busybox                     0                   1db3a0ddb80bb       busybox
	29165aa257b4e       db91994f4ee8f       7 minutes ago       Exited              coredns                     0                   fe342c6f04000       coredns-74ff55c5b-h4fss
	73b796fc3c23c       6a23fa8fd2b78       8 minutes ago       Exited              kindnet-cni                 0                   674aa502ed33d       kindnet-jwhtp
	05471d3e1c31a       ba04bb24b9575       8 minutes ago       Exited              storage-provisioner         0                   b0fe379120d01       storage-provisioner
	f1a07ea1e6c19       25a5233254979       8 minutes ago       Exited              kube-proxy                  0                   3e8d03c9dbbf1       kube-proxy-5drw5
	b1144ab00f4c3       05b738aa1bc63       8 minutes ago       Exited              etcd                        0                   6feb07e5b9ae1       etcd-old-k8s-version-908284
	34ed62a120d65       1df8a2b116bd1       8 minutes ago       Exited              kube-controller-manager     0                   dc492d0fa9f1b       kube-controller-manager-old-k8s-version-908284
	29503eaa5c2ae       2c08bbbc02d3a       8 minutes ago       Exited              kube-apiserver              0                   612d6019d5e2f       kube-apiserver-old-k8s-version-908284
	e72d9de27e5e4       e7605f88f17d6       8 minutes ago       Exited              kube-scheduler              0                   f2f21ab526b25       kube-scheduler-old-k8s-version-908284
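	
	A container table like the one above can be regenerated on the node with crictl pointed at the containerd socket recorded in the node annotations further down (a sketch):
	
	$ sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a
	
	The Exited attempt-0 rows are the pre-restart containers; the core components are Running on attempt 1 after the restart, while dashboard-metrics-scraper has already burned through five failed attempts.
	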
	
	
	==> containerd <==
	Sep 16 20:05:58 old-k8s-version-908284 containerd[570]: time="2024-09-16T20:05:58.375670858Z" level=info msg="CreateContainer within sandbox \"0fec3f3d85b11de175bcb99ccb1b511edf8e89ed324014297783ee2a235e7ac9\" for name:\"dashboard-metrics-scraper\"  attempt:4 returns container id \"d3492ea792d01f56ed2e814d6b98ede57d3df314aaf02b38a6070f054187710b\""
	Sep 16 20:05:58 old-k8s-version-908284 containerd[570]: time="2024-09-16T20:05:58.376465236Z" level=info msg="StartContainer for \"d3492ea792d01f56ed2e814d6b98ede57d3df314aaf02b38a6070f054187710b\""
	Sep 16 20:05:58 old-k8s-version-908284 containerd[570]: time="2024-09-16T20:05:58.457395100Z" level=info msg="StartContainer for \"d3492ea792d01f56ed2e814d6b98ede57d3df314aaf02b38a6070f054187710b\" returns successfully"
	Sep 16 20:05:58 old-k8s-version-908284 containerd[570]: time="2024-09-16T20:05:58.498765882Z" level=info msg="shim disconnected" id=d3492ea792d01f56ed2e814d6b98ede57d3df314aaf02b38a6070f054187710b namespace=k8s.io
	Sep 16 20:05:58 old-k8s-version-908284 containerd[570]: time="2024-09-16T20:05:58.498830857Z" level=warning msg="cleaning up after shim disconnected" id=d3492ea792d01f56ed2e814d6b98ede57d3df314aaf02b38a6070f054187710b namespace=k8s.io
	Sep 16 20:05:58 old-k8s-version-908284 containerd[570]: time="2024-09-16T20:05:58.498842081Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 16 20:05:58 old-k8s-version-908284 containerd[570]: time="2024-09-16T20:05:58.874988017Z" level=info msg="RemoveContainer for \"d0da4979f2dd23b682605ee1a856ddb74cf7e9bed5a6d950b160014b89c4a813\""
	Sep 16 20:05:58 old-k8s-version-908284 containerd[570]: time="2024-09-16T20:05:58.880547793Z" level=info msg="RemoveContainer for \"d0da4979f2dd23b682605ee1a856ddb74cf7e9bed5a6d950b160014b89c4a813\" returns successfully"
	Sep 16 20:06:50 old-k8s-version-908284 containerd[570]: time="2024-09-16T20:06:50.349130963Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 16 20:06:50 old-k8s-version-908284 containerd[570]: time="2024-09-16T20:06:50.355670746Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host" host=fake.domain
	Sep 16 20:06:50 old-k8s-version-908284 containerd[570]: time="2024-09-16T20:06:50.357753193Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	Sep 16 20:06:50 old-k8s-version-908284 containerd[570]: time="2024-09-16T20:06:50.357855082Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Sep 16 20:07:19 old-k8s-version-908284 containerd[570]: time="2024-09-16T20:07:19.349510713Z" level=info msg="CreateContainer within sandbox \"0fec3f3d85b11de175bcb99ccb1b511edf8e89ed324014297783ee2a235e7ac9\" for container name:\"dashboard-metrics-scraper\"  attempt:5"
	Sep 16 20:07:19 old-k8s-version-908284 containerd[570]: time="2024-09-16T20:07:19.370768350Z" level=info msg="CreateContainer within sandbox \"0fec3f3d85b11de175bcb99ccb1b511edf8e89ed324014297783ee2a235e7ac9\" for name:\"dashboard-metrics-scraper\"  attempt:5 returns container id \"defd3bd309095d30f292ae8ed2aa371c620195aee4bef87a90b374284c0b6079\""
	Sep 16 20:07:19 old-k8s-version-908284 containerd[570]: time="2024-09-16T20:07:19.371575642Z" level=info msg="StartContainer for \"defd3bd309095d30f292ae8ed2aa371c620195aee4bef87a90b374284c0b6079\""
	Sep 16 20:07:19 old-k8s-version-908284 containerd[570]: time="2024-09-16T20:07:19.441624284Z" level=info msg="StartContainer for \"defd3bd309095d30f292ae8ed2aa371c620195aee4bef87a90b374284c0b6079\" returns successfully"
	Sep 16 20:07:19 old-k8s-version-908284 containerd[570]: time="2024-09-16T20:07:19.466804910Z" level=info msg="shim disconnected" id=defd3bd309095d30f292ae8ed2aa371c620195aee4bef87a90b374284c0b6079 namespace=k8s.io
	Sep 16 20:07:19 old-k8s-version-908284 containerd[570]: time="2024-09-16T20:07:19.466862714Z" level=warning msg="cleaning up after shim disconnected" id=defd3bd309095d30f292ae8ed2aa371c620195aee4bef87a90b374284c0b6079 namespace=k8s.io
	Sep 16 20:07:19 old-k8s-version-908284 containerd[570]: time="2024-09-16T20:07:19.466873241Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 16 20:07:20 old-k8s-version-908284 containerd[570]: time="2024-09-16T20:07:20.107414949Z" level=info msg="RemoveContainer for \"d3492ea792d01f56ed2e814d6b98ede57d3df314aaf02b38a6070f054187710b\""
	Sep 16 20:07:20 old-k8s-version-908284 containerd[570]: time="2024-09-16T20:07:20.114608469Z" level=info msg="RemoveContainer for \"d3492ea792d01f56ed2e814d6b98ede57d3df314aaf02b38a6070f054187710b\" returns successfully"
	Sep 16 20:09:37 old-k8s-version-908284 containerd[570]: time="2024-09-16T20:09:37.350011788Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 16 20:09:37 old-k8s-version-908284 containerd[570]: time="2024-09-16T20:09:37.358342554Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host" host=fake.domain
	Sep 16 20:09:37 old-k8s-version-908284 containerd[570]: time="2024-09-16T20:09:37.359448957Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	Sep 16 20:09:37 old-k8s-version-908284 containerd[570]: time="2024-09-16T20:09:37.359589180Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
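	
	fake.domain is, as the name suggests, not a resolvable registry, so the repeated PullImage failures above are DNS misses rather than a network fault. The same failure can be reproduced by hand (a sketch):
	
	$ sudo crictl pull fake.domain/registry.k8s.io/echoserver:1.4   # fails: no such host
	$ nslookup fake.domain 192.168.85.1                             # the same lookup containerd attempted
	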
	
	
	==> coredns [29165aa257b4ed82c3bed159074da5f7d4b4358a908e5b0b2f105c538186699c] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = 093a0bf1423dd8c4eee62372bb216168
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:51827 - 22318 "HINFO IN 165091669525511166.1401606358352449879. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.010340098s
	
	
	==> coredns [7d87e62b9d7540a8d0fcd059083894029feeab7ea8f0a8cacd8811b01eee9456] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = 093a0bf1423dd8c4eee62372bb216168
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:57730 - 5279 "HINFO IN 1894957423654855324.3862667013363610242. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.023821647s
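	
	Both CoreDNS instances (attempt 0 and attempt 1) log the same configuration MD5, 093a0bf1423dd8c4eee62372bb216168, so the Corefile survived the restart unchanged. To inspect the Corefile behind that hash (a sketch):
	
	$ kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
	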
	
	
	==> describe nodes <==
	Name:               old-k8s-version-908284
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-908284
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=91d692c919753635ac118b7ed7ae5503b67c63c8
	                    minikube.k8s.io/name=old-k8s-version-908284
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T20_01_25_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 20:01:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-908284
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 20:09:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 20:04:49 +0000   Mon, 16 Sep 2024 20:01:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 20:04:49 +0000   Mon, 16 Sep 2024 20:01:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 20:04:49 +0000   Mon, 16 Sep 2024 20:01:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 20:04:49 +0000   Mon, 16 Sep 2024 20:01:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-908284
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 aa55f0e7d9c24a30a447f09a51929c1a
	  System UUID:                0a6faea2-2a69-4c2e-92ee-edcd51f75eeb
	  Boot ID:                    486805ab-1132-42a1-beb7-17af684154aa
	  Kernel Version:             5.15.0-1069-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.22
	  Kubelet Version:            v1.20.0
	  Kube-Proxy Version:         v1.20.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m43s
	  kube-system                 coredns-74ff55c5b-h4fss                           100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     8m3s
	  kube-system                 etcd-old-k8s-version-908284                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         8m11s
	  kube-system                 kindnet-jwhtp                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      8m3s
	  kube-system                 kube-apiserver-old-k8s-version-908284             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m11s
	  kube-system                 kube-controller-manager-old-k8s-version-908284    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m11s
	  kube-system                 kube-proxy-5drw5                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m3s
	  kube-system                 kube-scheduler-old-k8s-version-908284             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m11s
	  kube-system                 metrics-server-9975d5f86-92f4t                    100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         6m32s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m1s
	  kubernetes-dashboard        dashboard-metrics-scraper-8d5bb5db8-f4csf         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m37s
	  kubernetes-dashboard        kubernetes-dashboard-cd95d586-tvt7k               0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m37s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             420Mi (5%)  220Mi (2%)
	  ephemeral-storage  100Mi (0%)  0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  8m30s (x4 over 8m31s)  kubelet     Node old-k8s-version-908284 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m30s (x4 over 8m31s)  kubelet     Node old-k8s-version-908284 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m30s (x4 over 8m31s)  kubelet     Node old-k8s-version-908284 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m12s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m11s                  kubelet     Node old-k8s-version-908284 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m11s                  kubelet     Node old-k8s-version-908284 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m11s                  kubelet     Node old-k8s-version-908284 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m11s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                8m3s                   kubelet     Node old-k8s-version-908284 status is now: NodeReady
	  Normal  Starting                 8m2s                   kube-proxy  Starting kube-proxy.
	  Normal  Starting                 6m4s                   kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m4s (x7 over 6m4s)    kubelet     Node old-k8s-version-908284 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m4s (x8 over 6m4s)    kubelet     Node old-k8s-version-908284 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m4s (x8 over 6m4s)    kubelet     Node old-k8s-version-908284 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m4s                   kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 5m53s                  kube-proxy  Starting kube-proxy.
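	
	The node dump above is standard kubectl output and can be regenerated at any time (a sketch):
	
	$ kubectl describe node old-k8s-version-908284
	
	The Events table tells the restart story compactly: one kubelet start at ~8m12s (first boot), a second at ~6m4s (the SecondStart), each followed by its own kube-proxy start.
	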
	
	
	==> dmesg <==
	[Sep16 18:44] overlayfs: '/var/lib/containers/storage/overlay/l/Q2QJNMTVZL6GMULS36RA5ZJGSA' not a directory
	
	
	==> etcd [445fc47c234683c922f709203c6b1528636824cf9904de9437bfad48f5bb40bb] <==
	2024-09-16 20:05:43.598053 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 20:05:53.598191 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 20:06:03.598225 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 20:06:13.598135 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 20:06:23.598203 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 20:06:33.598180 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 20:06:43.598124 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 20:06:53.598087 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 20:07:03.598241 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 20:07:13.598084 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 20:07:23.598070 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 20:07:33.598021 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 20:07:43.598052 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 20:07:53.597932 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 20:08:03.598098 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 20:08:13.598132 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 20:08:23.598005 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 20:08:33.598187 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 20:08:43.598025 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 20:08:53.598067 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 20:09:03.597998 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 20:09:13.598193 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 20:09:23.598167 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 20:09:33.598085 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 20:09:43.598205 I | etcdserver/api/etcdhttp: /health OK (status code 200)
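	
	The steady /health 200s above mean etcd itself stayed healthy for the whole window; the control-plane wait failure is not an etcd problem. The same probe can be run directly against the member (a sketch; the certificate paths assume kubeadm's default etcd layout under the certificateDir /var/lib/minikube/certs shown earlier):
	
	$ sudo ETCDCTL_API=3 etcdctl --endpoints=https://192.168.85.2:2379 \
	    --cacert=/var/lib/minikube/certs/etcd/ca.crt \
	    --cert=/var/lib/minikube/certs/etcd/healthcheck-client.crt \
	    --key=/var/lib/minikube/certs/etcd/healthcheck-client.key \
	    endpoint health
	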
	
	
	==> etcd [b1144ab00f4c3249a9dbe4fed6b1368a14fe5aba7783b45f4fa53cc5e203ce97] <==
	2024-09-16 20:01:14.923758 I | etcdserver: 9f0758e1c58a86ed as single-node; fast-forwarding 9 ticks (election ticks 10)
	2024-09-16 20:01:14.923781 I | embed: listening for peers on 192.168.85.2:2380
	raft2024/09/16 20:01:15 INFO: 9f0758e1c58a86ed is starting a new election at term 1
	raft2024/09/16 20:01:15 INFO: 9f0758e1c58a86ed became candidate at term 2
	raft2024/09/16 20:01:15 INFO: 9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2
	raft2024/09/16 20:01:15 INFO: 9f0758e1c58a86ed became leader at term 2
	raft2024/09/16 20:01:15 INFO: raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2
	2024-09-16 20:01:15.388362 I | etcdserver: setting up the initial cluster version to 3.4
	2024-09-16 20:01:15.389819 N | etcdserver/membership: set the initial cluster version to 3.4
	2024-09-16 20:01:15.389912 I | etcdserver/api: enabled capabilities for version 3.4
	2024-09-16 20:01:15.390106 I | etcdserver: published {Name:old-k8s-version-908284 ClientURLs:[https://192.168.85.2:2379]} to cluster 68eaea490fab4e05
	2024-09-16 20:01:15.394787 I | embed: ready to serve client requests
	2024-09-16 20:01:15.396432 I | embed: serving client requests on 192.168.85.2:2379
	2024-09-16 20:01:15.403182 I | embed: ready to serve client requests
	2024-09-16 20:01:15.405331 I | embed: serving client requests on 127.0.0.1:2379
	2024-09-16 20:01:38.905874 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 20:01:48.152195 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 20:01:58.151989 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 20:02:08.152052 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 20:02:18.152283 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 20:02:28.152105 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 20:02:38.152170 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 20:02:48.157718 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 20:02:58.152334 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 20:03:08.152240 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> kernel <==
	 20:09:45 up  3:51,  0 users,  load average: 1.72, 2.00, 2.45
	Linux old-k8s-version-908284 5.15.0-1069-aws #75~20.04.1-Ubuntu SMP Mon Aug 19 16:22:47 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [73b796fc3c23c7bdf0b3e94af03866db4c7513e7f70c3a040a2734d0323c37be] <==
	I0916 20:01:44.627965       1 main.go:148] setting mtu 1500 for CNI 
	I0916 20:01:44.627987       1 main.go:178] kindnetd IP family: "ipv4"
	I0916 20:01:44.628001       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I0916 20:01:45.021887       1 controller.go:334] Starting controller kube-network-policies
	I0916 20:01:45.021914       1 controller.go:338] Waiting for informer caches to sync
	I0916 20:01:45.021921       1 shared_informer.go:313] Waiting for caches to sync for kube-network-policies
	I0916 20:01:45.322909       1 shared_informer.go:320] Caches are synced for kube-network-policies
	I0916 20:01:45.322941       1 metrics.go:61] Registering metrics
	I0916 20:01:45.323000       1 controller.go:374] Syncing nftables rules
	I0916 20:01:55.021638       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0916 20:01:55.021704       1 main.go:299] handling current node
	I0916 20:02:05.021413       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0916 20:02:05.021465       1 main.go:299] handling current node
	I0916 20:02:15.029058       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0916 20:02:15.029099       1 main.go:299] handling current node
	I0916 20:02:25.028556       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0916 20:02:25.028593       1 main.go:299] handling current node
	I0916 20:02:35.021016       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0916 20:02:35.021156       1 main.go:299] handling current node
	I0916 20:02:45.021624       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0916 20:02:45.021662       1 main.go:299] handling current node
	I0916 20:02:55.026962       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0916 20:02:55.027236       1 main.go:299] handling current node
	I0916 20:03:05.020972       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0916 20:03:05.021070       1 main.go:299] handling current node
	
	
	==> kindnet [eb8d587a9ae36a864b49240c05c5eb3eef4bbd1461b753f08ec024058d7f6b87] <==
	I0916 20:07:43.330723       1 main.go:299] handling current node
	I0916 20:07:53.321724       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0916 20:07:53.321757       1 main.go:299] handling current node
	I0916 20:08:03.323408       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0916 20:08:03.323442       1 main.go:299] handling current node
	I0916 20:08:13.330456       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0916 20:08:13.330491       1 main.go:299] handling current node
	I0916 20:08:23.330059       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0916 20:08:23.330091       1 main.go:299] handling current node
	I0916 20:08:33.329313       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0916 20:08:33.329344       1 main.go:299] handling current node
	I0916 20:08:43.330061       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0916 20:08:43.330095       1 main.go:299] handling current node
	I0916 20:08:53.321727       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0916 20:08:53.321770       1 main.go:299] handling current node
	I0916 20:09:03.329089       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0916 20:09:03.329122       1 main.go:299] handling current node
	I0916 20:09:13.330062       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0916 20:09:13.330096       1 main.go:299] handling current node
	I0916 20:09:23.330991       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0916 20:09:23.331032       1 main.go:299] handling current node
	I0916 20:09:33.327385       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0916 20:09:33.327421       1 main.go:299] handling current node
	I0916 20:09:43.329188       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0916 20:09:43.329223       1 main.go:299] handling current node
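	
	Both kindnet logs show the same steady loop: every ~10s the daemon lists the node's IPs (192.168.85.2) and reconciles rules for the current node only, as expected on a single-node cluster. The same stream can be pulled through the API instead of the container runtime (a sketch; this assumes the daemonset keeps kindnet's usual app=kindnet pod label):
	
	$ kubectl -n kube-system logs -l app=kindnet -c kindnet-cni --tail=20
	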
	
	
	==> kube-apiserver [29503eaa5c2ae2b3f8ad37d2fa456369ed87669ef1293a388883a927f2d6f5bd] <==
	I0916 20:01:22.521570       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0916 20:01:22.521740       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0916 20:01:22.530655       1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
	I0916 20:01:22.536746       1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
	I0916 20:01:22.536768       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0916 20:01:23.030956       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0916 20:01:23.070927       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0916 20:01:23.226234       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I0916 20:01:23.227567       1 controller.go:606] quota admission added evaluator for: endpoints
	I0916 20:01:23.231296       1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0916 20:01:24.221937       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	I0916 20:01:24.505318       1 controller.go:606] quota admission added evaluator for: deployments.apps
	I0916 20:01:24.597726       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	I0916 20:01:32.936531       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	I0916 20:01:41.492503       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
	I0916 20:01:41.518344       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	I0916 20:01:48.125487       1 client.go:360] parsed scheme: "passthrough"
	I0916 20:01:48.125531       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0916 20:01:48.125540       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0916 20:02:27.911366       1 client.go:360] parsed scheme: "passthrough"
	I0916 20:02:27.911412       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0916 20:02:27.911421       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0916 20:03:11.883201       1 client.go:360] parsed scheme: "passthrough"
	I0916 20:03:11.883258       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0916 20:03:11.883267       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	
	==> kube-apiserver [5125f7e68621ccafdca0574900a08627704d25dad8c6cb286177bcceafb722f3] <==
	I0916 20:06:42.905924       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0916 20:06:42.905933       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0916 20:06:51.380119       1 handler_proxy.go:102] no RequestInfo found in the context
	E0916 20:06:51.380208       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0916 20:06:51.380217       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0916 20:07:19.596516       1 client.go:360] parsed scheme: "passthrough"
	I0916 20:07:19.596561       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0916 20:07:19.596570       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0916 20:07:54.125992       1 client.go:360] parsed scheme: "passthrough"
	I0916 20:07:54.126036       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0916 20:07:54.126045       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0916 20:08:29.661429       1 client.go:360] parsed scheme: "passthrough"
	I0916 20:08:29.661471       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0916 20:08:29.661479       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0916 20:08:50.012466       1 handler_proxy.go:102] no RequestInfo found in the context
	E0916 20:08:50.012714       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0916 20:08:50.012733       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0916 20:09:02.595541       1 client.go:360] parsed scheme: "passthrough"
	I0916 20:09:02.595585       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0916 20:09:02.595606       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0916 20:09:41.921421       1 client.go:360] parsed scheme: "passthrough"
	I0916 20:09:41.921734       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0916 20:09:41.921850       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
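	
	The recurring v1beta1.metrics.k8s.io 503s above are the aggregated-API side of the metrics-server failure: the APIService is registered, but its backing pod (metrics-server-9975d5f86-92f4t, stuck in ImagePullBackOff per the kubelet log below) never comes up, so the aggregator keeps requeueing. The condition is visible directly (a sketch):
	
	$ kubectl get apiservice v1beta1.metrics.k8s.io
	$ kubectl get apiservice v1beta1.metrics.k8s.io -o jsonpath='{.status.conditions[?(@.type=="Available")].message}'
	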
	
	
	==> kube-controller-manager [34ed62a120d6504514cf022878498c06dd4558aff9f75e87eff0e60b822c82b0] <==
	I0916 20:01:41.531993       1 shared_informer.go:247] Caches are synced for ReplicaSet 
	I0916 20:01:41.532695       1 shared_informer.go:247] Caches are synced for PVC protection 
	I0916 20:01:41.532895       1 shared_informer.go:247] Caches are synced for resource quota 
	I0916 20:01:41.534385       1 shared_informer.go:247] Caches are synced for stateful set 
	I0916 20:01:41.540484       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-74ff55c5b to 2"
	I0916 20:01:41.553474       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-jwhtp"
	I0916 20:01:41.556462       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-5drw5"
	I0916 20:01:41.566709       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-wjmhv"
	I0916 20:01:41.580303       1 shared_informer.go:247] Caches are synced for taint 
	I0916 20:01:41.580529       1 node_lifecycle_controller.go:1429] Initializing eviction metric for zone: 
	W0916 20:01:41.583655       1 node_lifecycle_controller.go:1044] Missing timestamp for Node old-k8s-version-908284. Assuming now as a timestamp.
	I0916 20:01:41.584981       1 node_lifecycle_controller.go:1245] Controller detected that zone  is now in state Normal.
	I0916 20:01:41.582970       1 event.go:291] "Event occurred" object="old-k8s-version-908284" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node old-k8s-version-908284 event: Registered Node old-k8s-version-908284 in Controller"
	I0916 20:01:41.583180       1 taint_manager.go:187] Starting NoExecuteTaintManager
	I0916 20:01:41.648282       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-h4fss"
	I0916 20:01:41.904103       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I0916 20:01:42.004613       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0916 20:01:42.025386       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0916 20:01:42.025408       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0916 20:01:42.073886       1 request.go:655] Throttling request took 1.046424988s, request: GET:https://192.168.85.2:8443/apis/networking.k8s.io/v1beta1?timeout=32s
	I0916 20:01:42.876106       1 shared_informer.go:240] Waiting for caches to sync for resource quota
	I0916 20:01:42.876166       1 shared_informer.go:247] Caches are synced for resource quota 
	I0916 20:01:43.116057       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
	I0916 20:01:43.135783       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-wjmhv"
	I0916 20:03:11.627052       1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-9975d5f86 to 1"
	
	
	==> kube-controller-manager [e72de83d21dbee5bba189dbb08793f2149fe20dd123b4ab09058a74285c67018] <==
	E0916 20:05:39.073205       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0916 20:05:46.121672       1 request.go:655] Throttling request took 1.047727874s, request: GET:https://192.168.85.2:8443/apis/apiextensions.k8s.io/v1?timeout=32s
	W0916 20:05:46.973191       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0916 20:06:09.575142       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0916 20:06:18.623637       1 request.go:655] Throttling request took 1.048520984s, request: GET:https://192.168.85.2:8443/apis/apiextensions.k8s.io/v1?timeout=32s
	W0916 20:06:19.475119       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0916 20:06:40.079859       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0916 20:06:51.125771       1 request.go:655] Throttling request took 1.006719354s, request: GET:https://192.168.85.2:8443/apis/authorization.k8s.io/v1beta1?timeout=32s
	W0916 20:06:51.977431       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0916 20:07:10.581676       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0916 20:07:23.627824       1 request.go:655] Throttling request took 1.048337534s, request: GET:https://192.168.85.2:8443/apis/apiextensions.k8s.io/v1?timeout=32s
	W0916 20:07:24.479259       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0916 20:07:41.083584       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0916 20:07:56.129773       1 request.go:655] Throttling request took 1.048209128s, request: GET:https://192.168.85.2:8443/apis/coordination.k8s.io/v1?timeout=32s
	W0916 20:07:56.981159       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0916 20:08:11.585634       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0916 20:08:28.631723       1 request.go:655] Throttling request took 1.048448229s, request: GET:https://192.168.85.2:8443/apis/storage.k8s.io/v1beta1?timeout=32s
	W0916 20:08:29.483095       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0916 20:08:42.087709       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0916 20:09:01.133730       1 request.go:655] Throttling request took 1.034551245s, request: GET:https://192.168.85.2:8443/apis/node.k8s.io/v1beta1?timeout=32s
	W0916 20:09:01.985331       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0916 20:09:12.589624       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0916 20:09:33.635937       1 request.go:655] Throttling request took 1.04849804s, request: GET:https://192.168.85.2:8443/apis/authorization.k8s.io/v1?timeout=32s
	W0916 20:09:34.487403       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0916 20:09:43.095819       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	
	
	==> kube-proxy [d5453c9c01ecd9cc0d09b6bf868960b6312637344560d06faeca7704cd561607] <==
	I0916 20:03:51.059431       1 node.go:172] Successfully retrieved node IP: 192.168.85.2
	I0916 20:03:51.059512       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.85.2), assume IPv4 operation
	W0916 20:03:51.081432       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0916 20:03:51.081602       1 server_others.go:185] Using iptables Proxier.
	I0916 20:03:51.081934       1 server.go:650] Version: v1.20.0
	I0916 20:03:51.089210       1 config.go:224] Starting endpoint slice config controller
	I0916 20:03:51.089233       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0916 20:03:51.089388       1 config.go:315] Starting service config controller
	I0916 20:03:51.089399       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0916 20:03:51.189341       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0916 20:03:51.189451       1 shared_informer.go:247] Caches are synced for service config 
	
	
	==> kube-proxy [f1a07ea1e6c1902d18aad82fc148e1704278cff58e2f38adcb46954946abe5af] <==
	I0916 20:01:42.620499       1 node.go:172] Successfully retrieved node IP: 192.168.85.2
	I0916 20:01:42.620609       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.85.2), assume IPv4 operation
	W0916 20:01:42.662892       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0916 20:01:42.662984       1 server_others.go:185] Using iptables Proxier.
	I0916 20:01:42.663189       1 server.go:650] Version: v1.20.0
	I0916 20:01:42.663711       1 config.go:315] Starting service config controller
	I0916 20:01:42.663719       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0916 20:01:42.664353       1 config.go:224] Starting endpoint slice config controller
	I0916 20:01:42.664361       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0916 20:01:42.763841       1 shared_informer.go:247] Caches are synced for service config 
	I0916 20:01:42.764432       1 shared_informer.go:247] Caches are synced for endpoint slice config 
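	
	Both kube-proxy runs detect no configured proxy mode and fall back to iptables. The configured mode and the resulting rules can be checked with (a sketch):
	
	$ kubectl -n kube-system get configmap kube-proxy -o jsonpath='{.data.config\.conf}' | grep -i mode
	$ sudo iptables -t nat -L KUBE-SERVICES -n | head
	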
	
	
	==> kube-scheduler [7cdfa563a6ecfd1d7a2de5d367a101d25b1393c9cd20db9b8cc1ac35ca3d5911] <==
	I0916 20:03:44.088027       1 serving.go:331] Generated self-signed cert in-memory
	W0916 20:03:48.736282       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0916 20:03:48.736489       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0916 20:03:48.736582       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0916 20:03:48.736650       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0916 20:03:49.052268       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0916 20:03:49.058675       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0916 20:03:49.058702       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0916 20:03:49.058722       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0916 20:03:49.159417       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kube-scheduler [e72d9de27e5e47273b9413248459297d25f9204a3d3bdbe871f68a09eed8cc31] <==
	W0916 20:01:21.752278       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0916 20:01:21.793900       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0916 20:01:21.794194       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0916 20:01:21.795065       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0916 20:01:21.794682       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0916 20:01:21.797955       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0916 20:01:21.798035       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0916 20:01:21.801159       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0916 20:01:21.801573       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0916 20:01:21.801629       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0916 20:01:21.801697       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0916 20:01:21.801755       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0916 20:01:21.801810       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0916 20:01:21.802538       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0916 20:01:21.802611       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0916 20:01:21.802663       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0916 20:01:21.804141       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0916 20:01:22.658566       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0916 20:01:22.697559       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0916 20:01:22.715571       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0916 20:01:22.810774       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0916 20:01:22.826211       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0916 20:01:22.833903       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0916 20:01:23.014630       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0916 20:01:25.895414       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
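	
	The forbidden list/watch errors in the second scheduler log are a normal startup race: at 20:01:21-23 the scheduler starts before RBAC bootstrapping finishes, and by 20:01:25 the caches sync and the errors stop. If the extension-apiserver-authentication warning were to persist, the log's own suggestion translates to roughly this (a sketch; the rolebinding name is arbitrary):
	
	$ kubectl create rolebinding -n kube-system extension-apiserver-authentication-reader \
	    --role=extension-apiserver-authentication-reader \
	    --serviceaccount=kube-system:kube-scheduler
	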
	
	
	==> kubelet <==
	Sep 16 20:08:11 old-k8s-version-908284 kubelet[661]: E0916 20:08:11.347616     661 pod_workers.go:191] Error syncing pod 76b4241b-0d25-4778-8c3a-942e26c51c8a ("dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"
	Sep 16 20:08:21 old-k8s-version-908284 kubelet[661]: E0916 20:08:21.347550     661 pod_workers.go:191] Error syncing pod 33fe9335-f85f-4c1d-be16-ba14e2c4de6b ("metrics-server-9975d5f86-92f4t_kube-system(33fe9335-f85f-4c1d-be16-ba14e2c4de6b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 16 20:08:24 old-k8s-version-908284 kubelet[661]: I0916 20:08:24.347543     661 scope.go:95] [topologymanager] RemoveContainer - Container ID: defd3bd309095d30f292ae8ed2aa371c620195aee4bef87a90b374284c0b6079
	Sep 16 20:08:24 old-k8s-version-908284 kubelet[661]: E0916 20:08:24.347935     661 pod_workers.go:191] Error syncing pod 76b4241b-0d25-4778-8c3a-942e26c51c8a ("dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"
	Sep 16 20:08:32 old-k8s-version-908284 kubelet[661]: E0916 20:08:32.348528     661 pod_workers.go:191] Error syncing pod 33fe9335-f85f-4c1d-be16-ba14e2c4de6b ("metrics-server-9975d5f86-92f4t_kube-system(33fe9335-f85f-4c1d-be16-ba14e2c4de6b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 16 20:08:37 old-k8s-version-908284 kubelet[661]: I0916 20:08:37.346752     661 scope.go:95] [topologymanager] RemoveContainer - Container ID: defd3bd309095d30f292ae8ed2aa371c620195aee4bef87a90b374284c0b6079
	Sep 16 20:08:37 old-k8s-version-908284 kubelet[661]: E0916 20:08:37.347607     661 pod_workers.go:191] Error syncing pod 76b4241b-0d25-4778-8c3a-942e26c51c8a ("dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"
	Sep 16 20:08:47 old-k8s-version-908284 kubelet[661]: E0916 20:08:47.347660     661 pod_workers.go:191] Error syncing pod 33fe9335-f85f-4c1d-be16-ba14e2c4de6b ("metrics-server-9975d5f86-92f4t_kube-system(33fe9335-f85f-4c1d-be16-ba14e2c4de6b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 16 20:08:52 old-k8s-version-908284 kubelet[661]: I0916 20:08:52.348143     661 scope.go:95] [topologymanager] RemoveContainer - Container ID: defd3bd309095d30f292ae8ed2aa371c620195aee4bef87a90b374284c0b6079
	Sep 16 20:08:52 old-k8s-version-908284 kubelet[661]: E0916 20:08:52.352536     661 pod_workers.go:191] Error syncing pod 76b4241b-0d25-4778-8c3a-942e26c51c8a ("dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"
	Sep 16 20:08:58 old-k8s-version-908284 kubelet[661]: E0916 20:08:58.348436     661 pod_workers.go:191] Error syncing pod 33fe9335-f85f-4c1d-be16-ba14e2c4de6b ("metrics-server-9975d5f86-92f4t_kube-system(33fe9335-f85f-4c1d-be16-ba14e2c4de6b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 16 20:09:04 old-k8s-version-908284 kubelet[661]: I0916 20:09:04.347798     661 scope.go:95] [topologymanager] RemoveContainer - Container ID: defd3bd309095d30f292ae8ed2aa371c620195aee4bef87a90b374284c0b6079
	Sep 16 20:09:04 old-k8s-version-908284 kubelet[661]: E0916 20:09:04.348257     661 pod_workers.go:191] Error syncing pod 76b4241b-0d25-4778-8c3a-942e26c51c8a ("dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"
	Sep 16 20:09:12 old-k8s-version-908284 kubelet[661]: E0916 20:09:12.351486     661 pod_workers.go:191] Error syncing pod 33fe9335-f85f-4c1d-be16-ba14e2c4de6b ("metrics-server-9975d5f86-92f4t_kube-system(33fe9335-f85f-4c1d-be16-ba14e2c4de6b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 16 20:09:15 old-k8s-version-908284 kubelet[661]: I0916 20:09:15.346658     661 scope.go:95] [topologymanager] RemoveContainer - Container ID: defd3bd309095d30f292ae8ed2aa371c620195aee4bef87a90b374284c0b6079
	Sep 16 20:09:15 old-k8s-version-908284 kubelet[661]: E0916 20:09:15.347023     661 pod_workers.go:191] Error syncing pod 76b4241b-0d25-4778-8c3a-942e26c51c8a ("dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"
	Sep 16 20:09:25 old-k8s-version-908284 kubelet[661]: E0916 20:09:25.347602     661 pod_workers.go:191] Error syncing pod 33fe9335-f85f-4c1d-be16-ba14e2c4de6b ("metrics-server-9975d5f86-92f4t_kube-system(33fe9335-f85f-4c1d-be16-ba14e2c4de6b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 16 20:09:26 old-k8s-version-908284 kubelet[661]: I0916 20:09:26.346727     661 scope.go:95] [topologymanager] RemoveContainer - Container ID: defd3bd309095d30f292ae8ed2aa371c620195aee4bef87a90b374284c0b6079
	Sep 16 20:09:26 old-k8s-version-908284 kubelet[661]: E0916 20:09:26.347078     661 pod_workers.go:191] Error syncing pod 76b4241b-0d25-4778-8c3a-942e26c51c8a ("dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"
	Sep 16 20:09:37 old-k8s-version-908284 kubelet[661]: I0916 20:09:37.346870     661 scope.go:95] [topologymanager] RemoveContainer - Container ID: defd3bd309095d30f292ae8ed2aa371c620195aee4bef87a90b374284c0b6079
	Sep 16 20:09:37 old-k8s-version-908284 kubelet[661]: E0916 20:09:37.347798     661 pod_workers.go:191] Error syncing pod 76b4241b-0d25-4778-8c3a-942e26c51c8a ("dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-f4csf_kubernetes-dashboard(76b4241b-0d25-4778-8c3a-942e26c51c8a)"
	Sep 16 20:09:37 old-k8s-version-908284 kubelet[661]: E0916 20:09:37.359955     661 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host
	Sep 16 20:09:37 old-k8s-version-908284 kubelet[661]: E0916 20:09:37.360107     661 kuberuntime_image.go:51] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host
	Sep 16 20:09:37 old-k8s-version-908284 kubelet[661]: E0916 20:09:37.360357     661 kuberuntime_manager.go:829] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:metrics-server-token-chtwf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-9975d5f86-92f4t_kube-system(33fe9335-f85f-4c1d-be16-ba14e2c4de6b): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host
	Sep 16 20:09:37 old-k8s-version-908284 kubelet[661]: E0916 20:09:37.360554     661 pod_workers.go:191] Error syncing pod 33fe9335-f85f-4c1d-be16-ba14e2c4de6b ("metrics-server-9975d5f86-92f4t_kube-system(33fe9335-f85f-4c1d-be16-ba14e2c4de6b)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	
	
	==> kubernetes-dashboard [0f208bbb678a1a4ad386a9e37deb1b867d200257d179670c26adcc562f0e4cf2] <==
	2024/09/16 20:04:12 Using namespace: kubernetes-dashboard
	2024/09/16 20:04:12 Using in-cluster config to connect to apiserver
	2024/09/16 20:04:12 Using secret token for csrf signing
	2024/09/16 20:04:12 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/09/16 20:04:12 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/09/16 20:04:12 Successful initial request to the apiserver, version: v1.20.0
	2024/09/16 20:04:12 Generating JWE encryption key
	2024/09/16 20:04:12 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/09/16 20:04:12 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/09/16 20:04:12 Initializing JWE encryption key from synchronized object
	2024/09/16 20:04:12 Creating in-cluster Sidecar client
	2024/09/16 20:04:12 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/16 20:04:12 Serving insecurely on HTTP port: 9090
	2024/09/16 20:04:42 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/16 20:05:12 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/16 20:05:42 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/16 20:06:12 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/16 20:06:42 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/16 20:07:12 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/16 20:07:42 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/16 20:08:12 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/16 20:08:42 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/16 20:09:12 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/16 20:09:42 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/16 20:04:12 Starting overwatch
	
	
	==> storage-provisioner [05471d3e1c31ae8e82b32033906c8b2d9d329a3ea9850acf38ac17d8175331ed] <==
	I0916 20:01:43.652024       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0916 20:01:43.684295       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0916 20:01:43.684352       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0916 20:01:43.704723       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e7986af8-4f57-42e7-b677-3b6eac6f4e4b", APIVersion:"v1", ResourceVersion:"465", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-908284_a069465e-ac7b-4554-842d-3eb503763ab2 became leader
	I0916 20:01:43.705357       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0916 20:01:43.705468       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-908284_a069465e-ac7b-4554-842d-3eb503763ab2!
	I0916 20:01:43.806071       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-908284_a069465e-ac7b-4554-842d-3eb503763ab2!
	
	
	==> storage-provisioner [acc8d7546233616133689ca5f3763e68bd10522884488e5cf9ab9bf09947cb67] <==
	I0916 20:03:51.644952       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0916 20:03:51.665162       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0916 20:03:51.665211       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0916 20:04:09.117311       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0916 20:04:09.117793       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-908284_969ef2ed-331b-452d-9e7b-5ce3705bbb06!
	I0916 20:04:09.119206       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e7986af8-4f57-42e7-b677-3b6eac6f4e4b", APIVersion:"v1", ResourceVersion:"753", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-908284_969ef2ed-331b-452d-9e7b-5ce3705bbb06 became leader
	I0916 20:04:09.218621       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-908284_969ef2ed-331b-452d-9e7b-5ce3705bbb06!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-908284 -n old-k8s-version-908284
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-908284 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-9975d5f86-92f4t
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-908284 describe pod metrics-server-9975d5f86-92f4t
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-908284 describe pod metrics-server-9975d5f86-92f4t: exit status 1 (89.088306ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-9975d5f86-92f4t" not found

** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-908284 describe pod metrics-server-9975d5f86-92f4t: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (382.11s)
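Two failure signatures dominate the kubelet log above, and they point at different problems: metrics-server never runs because its image is pinned to the stand-in registry fake.domain, which cannot resolve (ErrImagePull, then ImagePullBackOff), while dashboard-metrics-scraper flaps in CrashLoopBackOff, which in turn explains the dashboard's repeating "Metric client health check failed" retries. A minimal Go sketch that reproduces the DNS side of the pull failure; the hostname and the expected "no such host" behavior come straight from the kubelet error, the rest is illustrative:

	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		// The kubelet's image pull dies at name resolution:
		//   "lookup fake.domain on 192.168.85.1:53: no such host"
		// Any host without fake.domain defined in DNS fails the same way.
		if _, err := net.LookupHost("fake.domain"); err != nil {
			fmt.Println("lookup failed as expected:", err)
		}
	}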


Test pass (298/328)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 13.9
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.22
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.31.1/json-events 7.88
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.06
18 TestDownloadOnly/v1.31.1/DeleteAll 0.22
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.56
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 266.26
31 TestAddons/serial/GCPAuth/Namespaces 0.16
33 TestAddons/parallel/Registry 16.42
34 TestAddons/parallel/Ingress 20.67
35 TestAddons/parallel/InspektorGadget 10.85
36 TestAddons/parallel/MetricsServer 5.78
39 TestAddons/parallel/CSI 54.87
40 TestAddons/parallel/Headlamp 11.31
41 TestAddons/parallel/CloudSpanner 5.61
42 TestAddons/parallel/LocalPath 53.87
43 TestAddons/parallel/NvidiaDevicePlugin 6.55
44 TestAddons/parallel/Yakd 12.07
45 TestAddons/StoppedEnableDisable 12.34
46 TestCertOptions 36.99
47 TestCertExpiration 223.84
49 TestForceSystemdFlag 37.45
50 TestForceSystemdEnv 34.9
51 TestDockerEnvContainerd 45.58
56 TestErrorSpam/setup 31.55
57 TestErrorSpam/start 0.67
58 TestErrorSpam/status 1.07
59 TestErrorSpam/pause 1.76
60 TestErrorSpam/unpause 1.87
61 TestErrorSpam/stop 1.51
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 50.01
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 6.01
68 TestFunctional/serial/KubeContext 0.07
69 TestFunctional/serial/KubectlGetPods 0.1
72 TestFunctional/serial/CacheCmd/cache/add_remote 4.22
73 TestFunctional/serial/CacheCmd/cache/add_local 1.23
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
75 TestFunctional/serial/CacheCmd/cache/list 0.06
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.29
77 TestFunctional/serial/CacheCmd/cache/cache_reload 2.02
78 TestFunctional/serial/CacheCmd/cache/delete 0.11
79 TestFunctional/serial/MinikubeKubectlCmd 0.14
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
81 TestFunctional/serial/ExtraConfig 43.94
82 TestFunctional/serial/ComponentHealth 0.12
83 TestFunctional/serial/LogsCmd 1.73
84 TestFunctional/serial/LogsFileCmd 1.72
85 TestFunctional/serial/InvalidService 4.64
87 TestFunctional/parallel/ConfigCmd 0.43
88 TestFunctional/parallel/DashboardCmd 10.02
89 TestFunctional/parallel/DryRun 0.4
90 TestFunctional/parallel/InternationalLanguage 0.18
91 TestFunctional/parallel/StatusCmd 1.02
95 TestFunctional/parallel/ServiceCmdConnect 7.79
96 TestFunctional/parallel/AddonsCmd 0.14
97 TestFunctional/parallel/PersistentVolumeClaim 25.89
99 TestFunctional/parallel/SSHCmd 0.53
100 TestFunctional/parallel/CpCmd 1.95
102 TestFunctional/parallel/FileSync 0.33
103 TestFunctional/parallel/CertSync 2.05
107 TestFunctional/parallel/NodeLabels 0.09
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.74
111 TestFunctional/parallel/License 0.23
112 TestFunctional/parallel/Version/short 0.08
113 TestFunctional/parallel/Version/components 1.44
114 TestFunctional/parallel/ImageCommands/ImageListShort 0.25
115 TestFunctional/parallel/ImageCommands/ImageListTable 0.27
116 TestFunctional/parallel/ImageCommands/ImageListJson 0.25
117 TestFunctional/parallel/ImageCommands/ImageListYaml 0.25
118 TestFunctional/parallel/ImageCommands/ImageBuild 3.56
119 TestFunctional/parallel/ImageCommands/Setup 0.8
120 TestFunctional/parallel/UpdateContextCmd/no_changes 0.19
121 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.15
122 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.16
123 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.48
124 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.43
125 TestFunctional/parallel/ServiceCmd/DeployApp 10.29
126 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.55
127 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.42
128 TestFunctional/parallel/ImageCommands/ImageRemove 0.56
129 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.68
130 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.4
132 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.48
133 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
135 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.33
136 TestFunctional/parallel/ServiceCmd/List 0.36
137 TestFunctional/parallel/ServiceCmd/JSONOutput 0.37
138 TestFunctional/parallel/ServiceCmd/HTTPS 0.37
139 TestFunctional/parallel/ServiceCmd/Format 0.37
140 TestFunctional/parallel/ServiceCmd/URL 0.38
141 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.08
142 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
146 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
147 TestFunctional/parallel/ProfileCmd/profile_not_create 0.41
148 TestFunctional/parallel/ProfileCmd/profile_list 0.4
149 TestFunctional/parallel/ProfileCmd/profile_json_output 0.37
150 TestFunctional/parallel/MountCmd/any-port 7.94
151 TestFunctional/parallel/MountCmd/specific-port 1.8
152 TestFunctional/parallel/MountCmd/VerifyCleanup 1.93
153 TestFunctional/delete_echo-server_images 0.04
154 TestFunctional/delete_my-image_image 0.03
155 TestFunctional/delete_minikube_cached_images 0.02
159 TestMultiControlPlane/serial/StartCluster 119.18
160 TestMultiControlPlane/serial/DeployApp 31.69
161 TestMultiControlPlane/serial/PingHostFromPods 1.55
162 TestMultiControlPlane/serial/AddWorkerNode 23.07
163 TestMultiControlPlane/serial/NodeLabels 0.11
164 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.75
165 TestMultiControlPlane/serial/CopyFile 19.05
166 TestMultiControlPlane/serial/StopSecondaryNode 12.83
167 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.57
168 TestMultiControlPlane/serial/RestartSecondaryNode 19.1
169 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.81
170 TestMultiControlPlane/serial/RestartClusterKeepsNodes 143.67
171 TestMultiControlPlane/serial/DeleteSecondaryNode 9.83
172 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.52
173 TestMultiControlPlane/serial/StopCluster 36
174 TestMultiControlPlane/serial/RestartCluster 78.84
175 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.53
176 TestMultiControlPlane/serial/AddSecondaryNode 46.06
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.77
181 TestJSONOutput/start/Command 51.7
182 TestJSONOutput/start/Audit 0
184 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
185 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/pause/Command 0.74
188 TestJSONOutput/pause/Audit 0
190 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/unpause/Command 0.66
194 TestJSONOutput/unpause/Audit 0
196 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/stop/Command 5.78
200 TestJSONOutput/stop/Audit 0
202 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
204 TestErrorJSONOutput 0.2
206 TestKicCustomNetwork/create_custom_network 37.13
207 TestKicCustomNetwork/use_default_bridge_network 34.91
208 TestKicExistingNetwork 33.3
209 TestKicCustomSubnet 34.39
210 TestKicStaticIP 34.23
211 TestMainNoArgs 0.05
212 TestMinikubeProfile 65.6
215 TestMountStart/serial/StartWithMountFirst 7.18
216 TestMountStart/serial/VerifyMountFirst 0.3
217 TestMountStart/serial/StartWithMountSecond 6.92
218 TestMountStart/serial/VerifyMountSecond 0.28
219 TestMountStart/serial/DeleteFirst 1.63
220 TestMountStart/serial/VerifyMountPostDelete 0.28
221 TestMountStart/serial/Stop 1.2
222 TestMountStart/serial/RestartStopped 7.51
223 TestMountStart/serial/VerifyMountPostStop 0.26
226 TestMultiNode/serial/FreshStart2Nodes 68.16
227 TestMultiNode/serial/DeployApp2Nodes 16.84
228 TestMultiNode/serial/PingHostFrom2Pods 1.02
229 TestMultiNode/serial/AddNode 18.9
230 TestMultiNode/serial/MultiNodeLabels 0.09
231 TestMultiNode/serial/ProfileList 0.34
232 TestMultiNode/serial/CopyFile 10.28
233 TestMultiNode/serial/StopNode 2.27
234 TestMultiNode/serial/StartAfterStop 9.7
235 TestMultiNode/serial/RestartKeepsNodes 113.99
236 TestMultiNode/serial/DeleteNode 5.48
237 TestMultiNode/serial/StopMultiNode 24.04
238 TestMultiNode/serial/RestartMultiNode 52.95
239 TestMultiNode/serial/ValidateNameConflict 35.92
244 TestPreload 116.13
246 TestScheduledStopUnix 107.83
249 TestInsufficientStorage 10.67
250 TestRunningBinaryUpgrade 78.93
252 TestKubernetesUpgrade 353.33
253 TestMissingContainerUpgrade 177.39
255 TestPause/serial/Start 98.46
256 TestPause/serial/SecondStartNoReconfiguration 6.26
257 TestPause/serial/Pause 0.76
258 TestPause/serial/VerifyStatus 0.31
259 TestPause/serial/Unpause 0.66
260 TestPause/serial/PauseAgain 0.84
261 TestPause/serial/DeletePaused 2.49
262 TestPause/serial/VerifyDeletedResources 0.14
263 TestStoppedBinaryUpgrade/Setup 0.71
264 TestStoppedBinaryUpgrade/Upgrade 119.92
265 TestStoppedBinaryUpgrade/MinikubeLogs 1.17
274 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
275 TestNoKubernetes/serial/StartWithK8s 42.37
276 TestNoKubernetes/serial/StartWithStopK8s 19.25
284 TestNetworkPlugins/group/false 3.6
288 TestNoKubernetes/serial/Start 10.07
289 TestNoKubernetes/serial/VerifyK8sNotRunning 0.34
290 TestNoKubernetes/serial/ProfileList 1.07
291 TestNoKubernetes/serial/Stop 1.26
292 TestNoKubernetes/serial/StartNoArgs 7.01
293 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.35
295 TestStartStop/group/old-k8s-version/serial/FirstStart 144.11
297 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 90.82
298 TestStartStop/group/old-k8s-version/serial/DeployApp 9.64
299 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.61
300 TestStartStop/group/old-k8s-version/serial/Stop 12.19
301 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.18
303 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.44
304 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.18
305 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.08
306 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.19
307 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 268.12
308 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
309 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 6.11
310 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.26
311 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.55
313 TestStartStop/group/embed-certs/serial/FirstStart 93.11
314 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
315 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.1
316 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.31
317 TestStartStop/group/old-k8s-version/serial/Pause 4.45
319 TestStartStop/group/no-preload/serial/FirstStart 59.37
320 TestStartStop/group/embed-certs/serial/DeployApp 11.36
321 TestStartStop/group/no-preload/serial/DeployApp 10.35
322 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.22
323 TestStartStop/group/embed-certs/serial/Stop 12.09
324 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.1
325 TestStartStop/group/no-preload/serial/Stop 12.1
326 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.17
327 TestStartStop/group/embed-certs/serial/SecondStart 267.65
328 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.22
329 TestStartStop/group/no-preload/serial/SecondStart 272.48
330 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
331 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.12
332 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.27
333 TestStartStop/group/embed-certs/serial/Pause 3.31
334 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
336 TestStartStop/group/newest-cni/serial/FirstStart 41.15
337 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.12
338 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.28
339 TestStartStop/group/no-preload/serial/Pause 4.59
340 TestNetworkPlugins/group/auto/Start 95.27
341 TestStartStop/group/newest-cni/serial/DeployApp 0
342 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 2.44
343 TestStartStop/group/newest-cni/serial/Stop 1.34
344 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.23
345 TestStartStop/group/newest-cni/serial/SecondStart 20.14
346 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
347 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
348 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.27
349 TestStartStop/group/newest-cni/serial/Pause 2.96
350 TestNetworkPlugins/group/kindnet/Start 88.68
351 TestNetworkPlugins/group/auto/KubeletFlags 0.31
352 TestNetworkPlugins/group/auto/NetCatPod 9.27
353 TestNetworkPlugins/group/auto/DNS 0.2
354 TestNetworkPlugins/group/auto/Localhost 0.17
355 TestNetworkPlugins/group/auto/HairPin 0.15
356 TestNetworkPlugins/group/calico/Start 64.61
357 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
358 TestNetworkPlugins/group/kindnet/KubeletFlags 0.42
359 TestNetworkPlugins/group/kindnet/NetCatPod 10.38
360 TestNetworkPlugins/group/kindnet/DNS 0.17
361 TestNetworkPlugins/group/kindnet/Localhost 0.21
362 TestNetworkPlugins/group/kindnet/HairPin 0.21
363 TestNetworkPlugins/group/custom-flannel/Start 55.33
364 TestNetworkPlugins/group/calico/ControllerPod 6.01
365 TestNetworkPlugins/group/calico/KubeletFlags 0.35
366 TestNetworkPlugins/group/calico/NetCatPod 10.36
367 TestNetworkPlugins/group/calico/DNS 0.23
368 TestNetworkPlugins/group/calico/Localhost 0.18
369 TestNetworkPlugins/group/calico/HairPin 0.18
370 TestNetworkPlugins/group/enable-default-cni/Start 48.39
371 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.28
372 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.27
373 TestNetworkPlugins/group/custom-flannel/DNS 0.22
374 TestNetworkPlugins/group/custom-flannel/Localhost 0.21
375 TestNetworkPlugins/group/custom-flannel/HairPin 0.19
376 TestNetworkPlugins/group/flannel/Start 51
377 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.32
378 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.46
379 TestNetworkPlugins/group/enable-default-cni/DNS 0.21
380 TestNetworkPlugins/group/enable-default-cni/Localhost 0.21
381 TestNetworkPlugins/group/enable-default-cni/HairPin 0.18
382 TestNetworkPlugins/group/bridge/Start 73.39
383 TestNetworkPlugins/group/flannel/ControllerPod 6.01
384 TestNetworkPlugins/group/flannel/KubeletFlags 0.38
385 TestNetworkPlugins/group/flannel/NetCatPod 10.32
386 TestNetworkPlugins/group/flannel/DNS 0.22
387 TestNetworkPlugins/group/flannel/Localhost 0.2
388 TestNetworkPlugins/group/flannel/HairPin 0.21
389 TestNetworkPlugins/group/bridge/KubeletFlags 0.29
390 TestNetworkPlugins/group/bridge/NetCatPod 9.26
391 TestNetworkPlugins/group/bridge/DNS 0.17
392 TestNetworkPlugins/group/bridge/Localhost 0.15
393 TestNetworkPlugins/group/bridge/HairPin 0.16
x
+
TestDownloadOnly/v1.20.0/json-events (13.9s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-655237 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-655237 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (13.898280662s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (13.90s)
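For context on what "json-events" checks: with -o=json, minikube writes its progress as one JSON object per line (CloudEvents-style step events), and the test asserts on that stream. A hedged consumer sketch follows; it assumes only that each stdout line is a self-contained JSON object, and the "type"/"data" field names are an assumption here, not verified against minikube's schema:

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	func main() {
		// Pipe the event stream in, e.g.:
		//   minikube start -o=json --download-only ... | ./events
		sc := bufio.NewScanner(os.Stdin)
		sc.Buffer(make([]byte, 0, 1<<20), 1<<20) // tolerate long event lines
		for sc.Scan() {
			var ev map[string]any
			if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
				continue // skip non-JSON noise
			}
			fmt.Println(ev["type"], ev["data"])
		}
	}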

x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-655237
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-655237: exit status 85 (65.881629ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-655237 | jenkins | v1.34.0 | 16 Sep 24 19:11 UTC |          |
	|         | -p download-only-655237        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 19:11:28
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 19:11:28.153427  721433 out.go:345] Setting OutFile to fd 1 ...
	I0916 19:11:28.153570  721433 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 19:11:28.153580  721433 out.go:358] Setting ErrFile to fd 2...
	I0916 19:11:28.153585  721433 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 19:11:28.153835  721433 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19649-716050/.minikube/bin
	W0916 19:11:28.153977  721433 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19649-716050/.minikube/config/config.json: open /home/jenkins/minikube-integration/19649-716050/.minikube/config/config.json: no such file or directory
	I0916 19:11:28.154364  721433 out.go:352] Setting JSON to true
	I0916 19:11:28.155252  721433 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":10402,"bootTime":1726503487,"procs":158,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0916 19:11:28.155357  721433 start.go:139] virtualization:  
	I0916 19:11:28.157796  721433 out.go:97] [download-only-655237] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	W0916 19:11:28.158000  721433 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19649-716050/.minikube/cache/preloaded-tarball: no such file or directory
	I0916 19:11:28.158040  721433 notify.go:220] Checking for updates...
	I0916 19:11:28.159729  721433 out.go:169] MINIKUBE_LOCATION=19649
	I0916 19:11:28.161174  721433 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 19:11:28.162866  721433 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19649-716050/kubeconfig
	I0916 19:11:28.164360  721433 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19649-716050/.minikube
	I0916 19:11:28.165845  721433 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0916 19:11:28.168712  721433 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0916 19:11:28.169006  721433 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 19:11:28.193417  721433 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0916 19:11:28.193549  721433 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 19:11:28.250144  721433 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-16 19:11:28.239598892 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0916 19:11:28.250264  721433 docker.go:318] overlay module found
	I0916 19:11:28.251876  721433 out.go:97] Using the docker driver based on user configuration
	I0916 19:11:28.251908  721433 start.go:297] selected driver: docker
	I0916 19:11:28.251916  721433 start.go:901] validating driver "docker" against <nil>
	I0916 19:11:28.252039  721433 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 19:11:28.304537  721433 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-16 19:11:28.294639931 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0916 19:11:28.304792  721433 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 19:11:28.305072  721433 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0916 19:11:28.305226  721433 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0916 19:11:28.307042  721433 out.go:169] Using Docker driver with root privileges
	I0916 19:11:28.308264  721433 cni.go:84] Creating CNI manager for ""
	I0916 19:11:28.308333  721433 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0916 19:11:28.308346  721433 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0916 19:11:28.308422  721433 start.go:340] cluster config:
	{Name:download-only-655237 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-655237 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 19:11:28.310135  721433 out.go:97] Starting "download-only-655237" primary control-plane node in "download-only-655237" cluster
	I0916 19:11:28.310155  721433 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0916 19:11:28.311492  721433 out.go:97] Pulling base image v0.0.45-1726481311-19649 ...
	I0916 19:11:28.311519  721433 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0916 19:11:28.311626  721433 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc in local docker daemon
	I0916 19:11:28.326880  721433 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc to local cache
	I0916 19:11:28.327069  721433 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc in local cache directory
	I0916 19:11:28.327163  721433 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc to local cache
	I0916 19:11:28.368065  721433 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0916 19:11:28.368104  721433 cache.go:56] Caching tarball of preloaded images
	I0916 19:11:28.368359  721433 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0916 19:11:28.370559  721433 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0916 19:11:28.370619  721433 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I0916 19:11:28.463404  721433 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:7e3d48ccb9f143791669d02e14ce1643 -> /home/jenkins/minikube-integration/19649-716050/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0916 19:11:35.846288  721433 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I0916 19:11:35.846423  721433 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19649-716050/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I0916 19:11:36.971006  721433 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on containerd
	I0916 19:11:36.971560  721433 profile.go:143] Saving config to /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/download-only-655237/config.json ...
	I0916 19:11:36.971627  721433 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/download-only-655237/config.json: {Name:mk8d7d5d2c2d7e0e9be0540690d560790b4aadf3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 19:11:36.971942  721433 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0916 19:11:36.972245  721433 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/19649-716050/.minikube/cache/linux/arm64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-655237 host does not exist
	  To start a cluster, run: "minikube start -p download-only-655237"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)
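The "Last Start" log above also shows the interesting part of this profile: the preload tarball is fetched with an md5 digest embedded in the URL (checksum=md5:7e3d48ccb9f143791669d02e14ce1643) and verified before use, and the final "host does not exist" message explains the non-zero exit from "minikube logs" here (status 85): a download-only profile never starts a node to collect logs from. A minimal sketch of that checksum-verification step, assuming the tarball sits in the current directory; the expected digest is taken from the download URL above, and the code is illustrative rather than minikube's implementation:

	package main

	import (
		"crypto/md5"
		"fmt"
		"io"
		"os"
	)

	func main() {
		const expected = "7e3d48ccb9f143791669d02e14ce1643" // from the download URL in the log
		f, err := os.Open("preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4")
		if err != nil {
			panic(err)
		}
		defer f.Close()
		h := md5.New()
		if _, err := io.Copy(h, f); err != nil {
			panic(err)
		}
		if got := fmt.Sprintf("%x", h.Sum(nil)); got == expected {
			fmt.Println("preload checksum OK")
		} else {
			fmt.Println("preload checksum mismatch:", got)
		}
	}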

x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.22s)

x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-655237
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

x
+
TestDownloadOnly/v1.31.1/json-events (7.88s)

=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-558265 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-558265 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (7.881613309s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (7.88s)

x
+
TestDownloadOnly/v1.31.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.1/preload-exists
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

x
+
TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-558265
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-558265: exit status 85 (63.438419ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-655237 | jenkins | v1.34.0 | 16 Sep 24 19:11 UTC |                     |
	|         | -p download-only-655237        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 16 Sep 24 19:11 UTC | 16 Sep 24 19:11 UTC |
	| delete  | -p download-only-655237        | download-only-655237 | jenkins | v1.34.0 | 16 Sep 24 19:11 UTC | 16 Sep 24 19:11 UTC |
	| start   | -o=json --download-only        | download-only-558265 | jenkins | v1.34.0 | 16 Sep 24 19:11 UTC |                     |
	|         | -p download-only-558265        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 19:11:42
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 19:11:42.466885  721633 out.go:345] Setting OutFile to fd 1 ...
	I0916 19:11:42.467025  721633 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 19:11:42.467036  721633 out.go:358] Setting ErrFile to fd 2...
	I0916 19:11:42.467042  721633 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 19:11:42.467306  721633 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19649-716050/.minikube/bin
	I0916 19:11:42.467750  721633 out.go:352] Setting JSON to true
	I0916 19:11:42.468611  721633 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":10416,"bootTime":1726503487,"procs":156,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0916 19:11:42.468687  721633 start.go:139] virtualization:  
	I0916 19:11:42.471130  721633 out.go:97] [download-only-558265] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0916 19:11:42.471377  721633 notify.go:220] Checking for updates...
	I0916 19:11:42.472837  721633 out.go:169] MINIKUBE_LOCATION=19649
	I0916 19:11:42.474177  721633 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 19:11:42.476218  721633 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19649-716050/kubeconfig
	I0916 19:11:42.477891  721633 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19649-716050/.minikube
	I0916 19:11:42.479035  721633 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0916 19:11:42.481850  721633 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0916 19:11:42.482112  721633 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 19:11:42.503308  721633 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0916 19:11:42.503451  721633 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 19:11:42.568901  721633 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-16 19:11:42.559085811 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0916 19:11:42.569048  721633 docker.go:318] overlay module found
	I0916 19:11:42.570412  721633 out.go:97] Using the docker driver based on user configuration
	I0916 19:11:42.570443  721633 start.go:297] selected driver: docker
	I0916 19:11:42.570451  721633 start.go:901] validating driver "docker" against <nil>
	I0916 19:11:42.570560  721633 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 19:11:42.632499  721633 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-16 19:11:42.622867191 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0916 19:11:42.632723  721633 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 19:11:42.633032  721633 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0916 19:11:42.633221  721633 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0916 19:11:42.634763  721633 out.go:169] Using Docker driver with root privileges
	I0916 19:11:42.636079  721633 cni.go:84] Creating CNI manager for ""
	I0916 19:11:42.636160  721633 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0916 19:11:42.636176  721633 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0916 19:11:42.636260  721633 start.go:340] cluster config:
	{Name:download-only-558265 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-558265 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 19:11:42.637709  721633 out.go:97] Starting "download-only-558265" primary control-plane node in "download-only-558265" cluster
	I0916 19:11:42.637735  721633 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0916 19:11:42.639022  721633 out.go:97] Pulling base image v0.0.45-1726481311-19649 ...
	I0916 19:11:42.639061  721633 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0916 19:11:42.639164  721633 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc in local docker daemon
	I0916 19:11:42.655278  721633 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc to local cache
	I0916 19:11:42.655431  721633 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc in local cache directory
	I0916 19:11:42.655463  721633 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc in local cache directory, skipping pull
	I0916 19:11:42.655473  721633 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc exists in cache, skipping pull
	I0916 19:11:42.655481  721633 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc as a tarball
	I0916 19:11:42.699089  721633 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4
	I0916 19:11:42.699113  721633 cache.go:56] Caching tarball of preloaded images
	I0916 19:11:42.699292  721633 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0916 19:11:42.701028  721633 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I0916 19:11:42.701054  721633 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4 ...
	I0916 19:11:42.777454  721633 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4?checksum=md5:b0cdb5ac9449e6e1388c2153988f76f5 -> /home/jenkins/minikube-integration/19649-716050/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-558265 host does not exist
	  To start a cluster, run: "minikube start -p download-only-558265"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.06s)
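Note: the "Last Start" log above is a download-only run, so minikube only caches the kicbase image and the v1.31.1 preload tarball and never creates a host; that is why "minikube logs" exits non-zero (85 in this run) while the test still passes. A minimal sketch of the same flow, assuming a scratch profile name:

    # cache artifacts without creating a node (profile name arbitrary)
    minikube start --download-only -p download-only-scratch --driver=docker --container-runtime=containerd
    # logs against the profile fail, since no host exists yet
    minikube logs -p download-only-scratch || echo "expected non-zero exit"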

x
+
TestDownloadOnly/v1.31.1/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.22s)

x
+
TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-558265
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

x
+
TestBinaryMirror (0.56s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-811581 --alsologtostderr --binary-mirror http://127.0.0.1:44327 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-811581" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-811581
--- PASS: TestBinaryMirror (0.56s)

x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-350900
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-350900: exit status 85 (61.746186ms)
-- stdout --
	* Profile "addons-350900" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-350900"
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-350900
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-350900: exit status 85 (68.832287ms)
-- stdout --
	* Profile "addons-350900" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-350900"
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

x
+
TestAddons/Setup (266.26s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p addons-350900 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p addons-350900 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns: (4m26.260759575s)
--- PASS: TestAddons/Setup (266.26s)
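For readability, the single start invocation above reflows as follows (same flags, regrouped; a transcription of the logged command, not the literal test code):

    out/minikube-linux-arm64 start -p addons-350900 --wait=true --memory=4000 --alsologtostderr \
      --driver=docker --container-runtime=containerd \
      --addons=registry --addons=metrics-server --addons=volumesnapshots \
      --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner \
      --addons=inspektor-gadget --addons=storage-provisioner-rancher \
      --addons=nvidia-device-plugin --addons=yakd --addons=volcano \
      --addons=ingress --addons=ingress-dns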

x
+
TestAddons/serial/GCPAuth/Namespaces (0.16s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-350900 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-350900 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.16s)

x
+
TestAddons/parallel/Registry (16.42s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 2.737755ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-fvpw8" [171ab3c7-51d1-4291-9dba-020409c54d0f] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.006431326s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-ttbjb" [85f2d6eb-dda8-4b16-a66c-74e84652b805] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004954005s
addons_test.go:342: (dbg) Run:  kubectl --context addons-350900 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-350900 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-350900 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.309225306s)
addons_test.go:361: (dbg) Run:  out/minikube-linux-arm64 -p addons-350900 ip
2024/09/16 19:20:13 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-arm64 -p addons-350900 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.42s)
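The core of the registry check is resolving the in-cluster service name from a throwaway busybox pod; reflowed from the logged command:

    kubectl --context addons-350900 run --rm registry-test --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -it -- \
      sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"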

x
+
TestAddons/parallel/Ingress (20.67s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-350900 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-350900 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-350900 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [55f4506d-3887-437f-a93b-22b481598ebb] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [55f4506d-3887-437f-a93b-22b481598ebb] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.004464476s
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-350900 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-350900 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-350900 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p addons-350900 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-arm64 -p addons-350900 addons disable ingress-dns --alsologtostderr -v=1: (1.178099227s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-arm64 -p addons-350900 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-arm64 -p addons-350900 addons disable ingress --alsologtostderr -v=1: (7.808948517s)
--- PASS: TestAddons/parallel/Ingress (20.67s)
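The ingress verification pattern, reflowed from the logged commands: curl the node with an explicit Host header, then resolve the ingress-dns test record against the node IP (192.168.49.2 in this run):

    out/minikube-linux-arm64 -p addons-350900 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
    nslookup hello-john.test 192.168.49.2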

x
+
TestAddons/parallel/InspektorGadget (10.85s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-l5blt" [db32689d-8dbb-4ef1-9bb0-f4933889f4e1] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.009179666s
addons_test.go:851: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-350900
addons_test.go:851: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-350900: (5.839983317s)
--- PASS: TestAddons/parallel/InspektorGadget (10.85s)

x
+
TestAddons/parallel/MetricsServer (5.78s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 5.033869ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-vp94l" [81bed691-751a-44db-8189-67b355235987] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.005442633s
addons_test.go:417: (dbg) Run:  kubectl --context addons-350900 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-linux-arm64 -p addons-350900 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.78s)

x
+
TestAddons/parallel/CSI (54.87s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 7.524155ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-350900 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-350900 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-350900 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-350900 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-350900 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-350900 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-350900 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-350900 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-350900 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-350900 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-350900 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-350900 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-350900 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-350900 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-350900 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-350900 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [3388056d-c699-4b17-87f9-4ee74e98c4cf] Pending
helpers_test.go:344: "task-pv-pod" [3388056d-c699-4b17-87f9-4ee74e98c4cf] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [3388056d-c699-4b17-87f9-4ee74e98c4cf] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.004440898s
addons_test.go:590: (dbg) Run:  kubectl --context addons-350900 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-350900 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-350900 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-350900 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-350900 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-350900 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-350900 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-350900 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-350900 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-350900 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-350900 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-350900 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-350900 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-350900 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-350900 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-350900 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-350900 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-350900 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-350900 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-350900 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [3a644b8a-cbff-476e-95f8-53891d2c7589] Pending
helpers_test.go:344: "task-pv-pod-restore" [3a644b8a-cbff-476e-95f8-53891d2c7589] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [3a644b8a-cbff-476e-95f8-53891d2c7589] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003569299s
addons_test.go:632: (dbg) Run:  kubectl --context addons-350900 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-350900 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-350900 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-arm64 -p addons-350900 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-arm64 -p addons-350900 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.752840674s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-arm64 -p addons-350900 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:648: (dbg) Done: out/minikube-linux-arm64 -p addons-350900 addons disable volumesnapshots --alsologtostderr -v=1: (1.041908168s)
--- PASS: TestAddons/parallel/CSI (54.87s)
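The CSI sequence above is the usual PVC -> pod -> snapshot -> restore round trip, condensed below (manifests are the test's own testdata files; the polling waits between steps are omitted):

    kubectl --context addons-350900 create -f testdata/csi-hostpath-driver/pvc.yaml
    kubectl --context addons-350900 create -f testdata/csi-hostpath-driver/pv-pod.yaml
    kubectl --context addons-350900 create -f testdata/csi-hostpath-driver/snapshot.yaml
    kubectl --context addons-350900 delete pod task-pv-pod
    kubectl --context addons-350900 delete pvc hpvc
    kubectl --context addons-350900 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
    kubectl --context addons-350900 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml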

x
+
TestAddons/parallel/Headlamp (11.31s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-350900 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-8gt8d" [db711f20-1e83-4aa1-a8b8-9c56753ad398] Pending
helpers_test.go:344: "headlamp-7b5c95b59d-8gt8d" [db711f20-1e83-4aa1-a8b8-9c56753ad398] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-8gt8d" [db711f20-1e83-4aa1-a8b8-9c56753ad398] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.003672532s
addons_test.go:839: (dbg) Run:  out/minikube-linux-arm64 -p addons-350900 addons disable headlamp --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Headlamp (11.31s)

x
+
TestAddons/parallel/CloudSpanner (5.61s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-dx8wr" [5bdaa72b-9d96-4cdd-8eef-91db921c94b2] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004790292s
addons_test.go:870: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-350900
--- PASS: TestAddons/parallel/CloudSpanner (5.61s)

x
+
TestAddons/parallel/LocalPath (53.87s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-350900 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-350900 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-350900 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-350900 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-350900 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-350900 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-350900 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [ac1e6a18-8807-43d5-a935-7f04b53e7939] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [ac1e6a18-8807-43d5-a935-7f04b53e7939] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [ac1e6a18-8807-43d5-a935-7f04b53e7939] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.003402977s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-350900 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-arm64 -p addons-350900 ssh "cat /opt/local-path-provisioner/pvc-09e384ed-e06e-4615-b323-5628143d2348_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-350900 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-350900 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-arm64 -p addons-350900 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-linux-arm64 -p addons-350900 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.611488359s)
--- PASS: TestAddons/parallel/LocalPath (53.87s)

x
+
TestAddons/parallel/NvidiaDevicePlugin (6.55s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-4vbhs" [b56b0b22-520b-4e8e-b4c1-4f9fb8b9f945] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004127488s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-350900
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.55s)

x
+
TestAddons/parallel/Yakd (12.07s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-khgpx" [0ce60230-227d-4065-ab74-de0c24db6abf] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.00445134s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-arm64 -p addons-350900 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-arm64 -p addons-350900 addons disable yakd --alsologtostderr -v=1: (6.060856695s)
--- PASS: TestAddons/parallel/Yakd (12.07s)

x
+
TestAddons/StoppedEnableDisable (12.34s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-350900
addons_test.go:174: (dbg) Done: out/minikube-linux-arm64 stop -p addons-350900: (12.055824267s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-350900
addons_test.go:182: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-350900
addons_test.go:187: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-350900
--- PASS: TestAddons/StoppedEnableDisable (12.34s)

x
+
TestCertOptions (36.99s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-105315 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
E0916 20:00:31.996596  721428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/functional-720698/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-105315 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (34.312679743s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-105315 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-105315 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-105315 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-105315" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-105315
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-105315: (1.989230319s)
--- PASS: TestCertOptions (36.99s)
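To spot-check the SANs injected by --apiserver-ips/--apiserver-names, the logged openssl call can be narrowed (the grep filter is an added convenience, not part of the test):

    out/minikube-linux-arm64 -p cert-options-105315 ssh \
      "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 'Subject Alternative Name'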

x
+
TestCertExpiration (223.84s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-277633 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-277633 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (34.374031156s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-277633 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-277633 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (7.152516903s)
helpers_test.go:175: Cleaning up "cert-expiration-277633" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-277633
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-277633: (2.316789274s)
--- PASS: TestCertExpiration (223.84s)
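The mechanism under test: restarting the same profile with a new --cert-expiration window regenerates the certificates. Roughly, per the two logged starts (the test apparently waits out the 3m window between them, which accounts for most of the 223s runtime):

    minikube start -p cert-expiration-277633 --memory=2048 --cert-expiration=3m --driver=docker --container-runtime=containerd
    # ~3 minutes later, restart with a longer validity to force renewal
    minikube start -p cert-expiration-277633 --memory=2048 --cert-expiration=8760h --driver=docker --container-runtime=containerd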

x
+
TestForceSystemdFlag (37.45s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-474910 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-474910 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (35.240168793s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-474910 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-474910" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-474910
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-474910: (1.927202103s)
--- PASS: TestForceSystemdFlag (37.45s)
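Both force-systemd tests assert the cgroup driver by reading the containerd config from inside the node; with --force-systemd the runc options should carry SystemdCgroup = true (the grep filter is an addition for quick inspection):

    out/minikube-linux-arm64 -p force-systemd-flag-474910 ssh "cat /etc/containerd/config.toml" | grep SystemdCgroup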

x
+
TestForceSystemdEnv (34.9s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-916239 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-916239 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (32.575789016s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-916239 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-916239" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-916239
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-916239: (2.04987426s)
--- PASS: TestForceSystemdEnv (34.90s)

x
+
TestDockerEnvContainerd (45.58s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-238034 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-238034 --driver=docker  --container-runtime=containerd: (29.878730475s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-238034"
docker_test.go:189: (dbg) Done: /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-238034": (1.019201607s)
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-XV8vhi7RN4hK/agent.741528" SSH_AGENT_PID="741529" DOCKER_HOST=ssh://docker@127.0.0.1:33535 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-XV8vhi7RN4hK/agent.741528" SSH_AGENT_PID="741529" DOCKER_HOST=ssh://docker@127.0.0.1:33535 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-XV8vhi7RN4hK/agent.741528" SSH_AGENT_PID="741529" DOCKER_HOST=ssh://docker@127.0.0.1:33535 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.171730982s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-XV8vhi7RN4hK/agent.741528" SSH_AGENT_PID="741529" DOCKER_HOST=ssh://docker@127.0.0.1:33535 docker image ls"
docker_test.go:250: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-XV8vhi7RN4hK/agent.741528" SSH_AGENT_PID="741529" DOCKER_HOST=ssh://docker@127.0.0.1:33535 docker image ls": (1.01112045s)
helpers_test.go:175: Cleaning up "dockerenv-238034" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-238034
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-238034: (1.93521108s)
--- PASS: TestDockerEnvContainerd (45.58s)
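The docker-env round trip above reads more naturally in its interactive form: evaluate the emitted environment, after which a local docker client talks to the daemon inside the node over SSH. A sketch of the same steps:

    eval "$(out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-238034)"
    docker version
    DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env
    docker image ls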

x
+
TestErrorSpam/setup (31.55s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-495212 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-495212 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-495212 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-495212 --driver=docker  --container-runtime=containerd: (31.554858416s)
--- PASS: TestErrorSpam/setup (31.55s)

x
+
TestErrorSpam/start (0.67s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-495212 --log_dir /tmp/nospam-495212 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-495212 --log_dir /tmp/nospam-495212 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-495212 --log_dir /tmp/nospam-495212 start --dry-run
--- PASS: TestErrorSpam/start (0.67s)

x
+
TestErrorSpam/status (1.07s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-495212 --log_dir /tmp/nospam-495212 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-495212 --log_dir /tmp/nospam-495212 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-495212 --log_dir /tmp/nospam-495212 status
--- PASS: TestErrorSpam/status (1.07s)

x
+
TestErrorSpam/pause (1.76s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-495212 --log_dir /tmp/nospam-495212 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-495212 --log_dir /tmp/nospam-495212 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-495212 --log_dir /tmp/nospam-495212 pause
--- PASS: TestErrorSpam/pause (1.76s)

x
+
TestErrorSpam/unpause (1.87s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-495212 --log_dir /tmp/nospam-495212 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-495212 --log_dir /tmp/nospam-495212 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-495212 --log_dir /tmp/nospam-495212 unpause
--- PASS: TestErrorSpam/unpause (1.87s)

x
+
TestErrorSpam/stop (1.51s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-495212 --log_dir /tmp/nospam-495212 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-495212 --log_dir /tmp/nospam-495212 stop: (1.320335745s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-495212 --log_dir /tmp/nospam-495212 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-495212 --log_dir /tmp/nospam-495212 stop
--- PASS: TestErrorSpam/stop (1.51s)

x
+
TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19649-716050/.minikube/files/etc/test/nested/copy/721428/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

x
+
TestFunctional/serial/StartWithProxy (50.01s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-arm64 start -p functional-720698 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
functional_test.go:2234: (dbg) Done: out/minikube-linux-arm64 start -p functional-720698 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (50.009587491s)
--- PASS: TestFunctional/serial/StartWithProxy (50.01s)

x
+
TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

x
+
TestFunctional/serial/SoftStart (6.01s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-linux-arm64 start -p functional-720698 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-arm64 start -p functional-720698 --alsologtostderr -v=8: (6.009530698s)
functional_test.go:663: soft start took 6.010606934s for "functional-720698" cluster.
--- PASS: TestFunctional/serial/SoftStart (6.01s)

x
+
TestFunctional/serial/KubeContext (0.07s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.07s)

x
+
TestFunctional/serial/KubectlGetPods (0.1s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-720698 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

x
+
TestFunctional/serial/CacheCmd/cache/add_remote (4.22s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-720698 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-720698 cache add registry.k8s.io/pause:3.1: (1.581317048s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-720698 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-720698 cache add registry.k8s.io/pause:3.3: (1.368922613s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-720698 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-720698 cache add registry.k8s.io/pause:latest: (1.270058926s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.22s)
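Taken together with the next subtests, the cache group exercises one lifecycle; assembled from the commands logged in this run:

    out/minikube-linux-arm64 -p functional-720698 cache add registry.k8s.io/pause:3.1
    out/minikube-linux-arm64 cache list
    out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
    out/minikube-linux-arm64 -p functional-720698 ssh sudo crictl images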

x
+
TestFunctional/serial/CacheCmd/cache/add_local (1.23s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-720698 /tmp/TestFunctionalserialCacheCmdcacheadd_local2182885219/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-arm64 -p functional-720698 cache add minikube-local-cache-test:functional-720698
functional_test.go:1094: (dbg) Run:  out/minikube-linux-arm64 -p functional-720698 cache delete minikube-local-cache-test:functional-720698
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-720698
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.23s)
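
Outside CI, the add_local flow above amounts to building a host-only image and round-tripping it through minikube's image cache; a minimal sketch using the same commands (minikube stands in for this run's out/minikube-linux-arm64 binary, and the build context here is any directory with a suitable Dockerfile rather than the run's temp dir):

	docker build -t minikube-local-cache-test:functional-720698 .     # image exists only on the host
	minikube -p functional-720698 cache add minikube-local-cache-test:functional-720698
	minikube -p functional-720698 cache delete minikube-local-cache-test:functional-720698
	docker rmi minikube-local-cache-test:functional-720698            # clean up the host-side copy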

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-arm64 -p functional-720698 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.02s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-arm64 -p functional-720698 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-arm64 -p functional-720698 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-720698 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (290.252042ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-arm64 -p functional-720698 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-arm64 -p functional-720698 cache reload: (1.083973376s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-arm64 -p functional-720698 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.02s)
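
The cache_reload assertions above hinge on `crictl inspecti` failing once the image is removed and succeeding again after `cache reload`; a reproduction sketch with the same commands (minikube stands in for this run's binary):

	minikube -p functional-720698 ssh sudo crictl rmi registry.k8s.io/pause:latest       # drop the image inside the node
	minikube -p functional-720698 ssh sudo crictl inspecti registry.k8s.io/pause:latest  # exits 1: image not present
	minikube -p functional-720698 cache reload                                           # push cached images back in
	minikube -p functional-720698 ssh sudo crictl inspecti registry.k8s.io/pause:latest  # exits 0: image restored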

TestFunctional/serial/CacheCmd/cache/delete (0.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

TestFunctional/serial/MinikubeKubectlCmd (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-arm64 -p functional-720698 kubectl -- --context functional-720698 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-720698 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

TestFunctional/serial/ExtraConfig (43.94s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-arm64 start -p functional-720698 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-arm64 start -p functional-720698 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (43.937435487s)
functional_test.go:761: restart took 43.937531847s for "functional-720698" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (43.94s)

TestFunctional/serial/ComponentHealth (0.12s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-720698 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.12s)

TestFunctional/serial/LogsCmd (1.73s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-arm64 -p functional-720698 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-arm64 -p functional-720698 logs: (1.733444115s)
--- PASS: TestFunctional/serial/LogsCmd (1.73s)

TestFunctional/serial/LogsFileCmd (1.72s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-arm64 -p functional-720698 logs --file /tmp/TestFunctionalserialLogsFileCmd1193112491/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-arm64 -p functional-720698 logs --file /tmp/TestFunctionalserialLogsFileCmd1193112491/001/logs.txt: (1.717765208s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.72s)

TestFunctional/serial/InvalidService (4.64s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-720698 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-720698
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-720698: exit status 115 (423.954085ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:32052 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-720698 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.64s)
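
Exit status 115 above is minikube's SVC_UNREACHABLE error: the Service object exists (hence the URL table on stdout) but has no running pods behind it. A sketch of the same check:

	kubectl --context functional-720698 apply -f testdata/invalidsvc.yaml
	minikube service invalid-svc -p functional-720698        # exits 115: no running pod for service invalid-svc
	kubectl --context functional-720698 delete -f testdata/invalidsvc.yaml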

TestFunctional/parallel/ConfigCmd (0.43s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-720698 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-720698 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-720698 config get cpus: exit status 14 (78.758329ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-720698 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-720698 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-720698 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-720698 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-720698 config get cpus: exit status 14 (64.560311ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.43s)
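
The assertions above rely on `config get` exiting 14 for an unset key rather than printing nothing; a sketch of the set/get/unset cycle (minikube stands in for this run's binary):

	minikube -p functional-720698 config set cpus 2
	minikube -p functional-720698 config get cpus      # prints 2, exits 0
	minikube -p functional-720698 config unset cpus
	minikube -p functional-720698 config get cpus      # exits 14: key not found in config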

TestFunctional/parallel/DashboardCmd (10.02s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-720698 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-720698 --alsologtostderr -v=1] ...
E0916 19:26:18.760192  721428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/addons-350900/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:508: unable to kill pid 758458: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (10.02s)

TestFunctional/parallel/DryRun (0.4s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-arm64 start -p functional-720698 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-720698 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (167.938368ms)

-- stdout --
	* [functional-720698] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19649
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19649-716050/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19649-716050/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0916 19:26:08.403430  758045 out.go:345] Setting OutFile to fd 1 ...
	I0916 19:26:08.403901  758045 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 19:26:08.403917  758045 out.go:358] Setting ErrFile to fd 2...
	I0916 19:26:08.403923  758045 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 19:26:08.404176  758045 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19649-716050/.minikube/bin
	I0916 19:26:08.404549  758045 out.go:352] Setting JSON to false
	I0916 19:26:08.405469  758045 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":11282,"bootTime":1726503487,"procs":211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0916 19:26:08.405541  758045 start.go:139] virtualization:  
	I0916 19:26:08.407561  758045 out.go:177] * [functional-720698] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0916 19:26:08.409446  758045 out.go:177]   - MINIKUBE_LOCATION=19649
	I0916 19:26:08.409561  758045 notify.go:220] Checking for updates...
	I0916 19:26:08.411863  758045 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 19:26:08.413187  758045 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19649-716050/kubeconfig
	I0916 19:26:08.414427  758045 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19649-716050/.minikube
	I0916 19:26:08.415646  758045 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0916 19:26:08.416883  758045 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 19:26:08.418559  758045 config.go:182] Loaded profile config "functional-720698": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 19:26:08.419131  758045 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 19:26:08.441960  758045 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0916 19:26:08.442100  758045 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 19:26:08.506344  758045 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-16 19:26:08.496313376 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0916 19:26:08.506465  758045 docker.go:318] overlay module found
	I0916 19:26:08.508334  758045 out.go:177] * Using the docker driver based on existing profile
	I0916 19:26:08.509742  758045 start.go:297] selected driver: docker
	I0916 19:26:08.509762  758045 start.go:901] validating driver "docker" against &{Name:functional-720698 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-720698 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 19:26:08.509886  758045 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 19:26:08.511438  758045 out.go:201] 
	W0916 19:26:08.512796  758045 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0916 19:26:08.513969  758045 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-arm64 start -p functional-720698 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.40s)
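
--dry-run validates the requested flags against the existing profile without mutating it: the undersized request fails fast with exit 23 (RSRC_INSUFFICIENT_REQ_MEMORY), while a request that keeps the defaults validates cleanly. A sketch of both invocations:

	minikube start -p functional-720698 --dry-run --memory 250MB --driver=docker --container-runtime=containerd
	# exits 23: requested 250MiB is below the 1800MB usable minimum
	minikube start -p functional-720698 --dry-run --alsologtostderr -v=1 --driver=docker --container-runtime=containerd
	# exits 0: configuration validates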

TestFunctional/parallel/InternationalLanguage (0.18s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-arm64 start -p functional-720698 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-720698 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (179.49778ms)

-- stdout --
	* [functional-720698] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19649
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19649-716050/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19649-716050/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0916 19:26:08.801345  758161 out.go:345] Setting OutFile to fd 1 ...
	I0916 19:26:08.801489  758161 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 19:26:08.801498  758161 out.go:358] Setting ErrFile to fd 2...
	I0916 19:26:08.801503  758161 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 19:26:08.802784  758161 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19649-716050/.minikube/bin
	I0916 19:26:08.803188  758161 out.go:352] Setting JSON to false
	I0916 19:26:08.804211  758161 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":11282,"bootTime":1726503487,"procs":211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0916 19:26:08.804293  758161 start.go:139] virtualization:  
	I0916 19:26:08.805880  758161 out.go:177] * [functional-720698] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	I0916 19:26:08.807968  758161 out.go:177]   - MINIKUBE_LOCATION=19649
	I0916 19:26:08.808085  758161 notify.go:220] Checking for updates...
	I0916 19:26:08.810248  758161 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 19:26:08.812326  758161 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19649-716050/kubeconfig
	I0916 19:26:08.814274  758161 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19649-716050/.minikube
	I0916 19:26:08.817005  758161 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0916 19:26:08.820085  758161 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 19:26:08.822548  758161 config.go:182] Loaded profile config "functional-720698": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 19:26:08.823115  758161 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 19:26:08.852605  758161 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0916 19:26:08.852759  758161 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 19:26:08.910897  758161 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-16 19:26:08.901339901 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0916 19:26:08.911016  758161 docker.go:318] overlay module found
	I0916 19:26:08.912827  758161 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0916 19:26:08.914449  758161 start.go:297] selected driver: docker
	I0916 19:26:08.914472  758161 start.go:901] validating driver "docker" against &{Name:functional-720698 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-720698 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 19:26:08.914581  758161 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 19:26:08.917353  758161 out.go:201] 
	W0916 19:26:08.920451  758161 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0916 19:26:08.922572  758161 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.18s)

TestFunctional/parallel/StatusCmd (1.02s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-arm64 -p functional-720698 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-arm64 -p functional-720698 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-arm64 -p functional-720698 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.02s)
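
The three invocations above cover the default, Go-template, and JSON output modes; a sketch (the "kublet" label spelling is carried over verbatim from the test source):

	minikube -p functional-720698 status
	minikube -p functional-720698 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
	minikube -p functional-720698 status -o json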

TestFunctional/parallel/ServiceCmdConnect (7.79s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-720698 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-720698 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-8fcfs" [1b05a013-e662-4ee2-a861-82fc930ff3c3] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-8fcfs" [1b05a013-e662-4ee2-a861-82fc930ff3c3] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.004766837s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-arm64 -p functional-720698 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:32049
functional_test.go:1675: http://192.168.49.2:32049: success! body:

Hostname: hello-node-connect-65d86f57f4-8fcfs

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:32049
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.79s)
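
The echoserver body above confirms end-to-end NodePort reachability; a sketch of the flow, with a curl step standing in for the test's HTTP GET (the NodePort is whatever the cluster assigns, 32049 in this run):

	kubectl --context functional-720698 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
	kubectl --context functional-720698 expose deployment hello-node-connect --type=NodePort --port=8080
	minikube -p functional-720698 service hello-node-connect --url   # e.g. http://192.168.49.2:32049
	curl http://192.168.49.2:32049/                                  # assumed stand-in for the test's GET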

TestFunctional/parallel/AddonsCmd (0.14s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-arm64 -p functional-720698 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-arm64 -p functional-720698 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

TestFunctional/parallel/PersistentVolumeClaim (25.89s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [67eaf23d-646a-4a58-88a8-29f050a15c47] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003230138s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-720698 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-720698 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-720698 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-720698 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [031a9b02-c852-4fff-b050-acb9361fb673] Pending
helpers_test.go:344: "sp-pod" [031a9b02-c852-4fff-b050-acb9361fb673] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [031a9b02-c852-4fff-b050-acb9361fb673] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.004003014s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-720698 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-720698 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-720698 delete -f testdata/storage-provisioner/pod.yaml: (1.856870679s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-720698 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [6660bf66-f056-4b70-8b0a-15ba09f5ba1c] Pending
helpers_test.go:344: "sp-pod" [6660bf66-f056-4b70-8b0a-15ba09f5ba1c] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.004595295s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-720698 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.89s)
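
The delete/re-apply above is the point of the test: /tmp/mount is backed by the PVC, so a file written by the first sp-pod must still be visible from its replacement. A sketch of the persistence check:

	kubectl --context functional-720698 apply -f testdata/storage-provisioner/pvc.yaml
	kubectl --context functional-720698 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-720698 exec sp-pod -- touch /tmp/mount/foo   # write through the claim
	kubectl --context functional-720698 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-720698 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-720698 exec sp-pod -- ls /tmp/mount          # foo survives the pod swap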

TestFunctional/parallel/SSHCmd (0.53s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-arm64 -p functional-720698 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-arm64 -p functional-720698 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.53s)

TestFunctional/parallel/CpCmd (1.95s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-720698 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-720698 ssh -n functional-720698 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-720698 cp functional-720698:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd620709442/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-720698 ssh -n functional-720698 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-720698 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-720698 ssh -n functional-720698 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.95s)
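
The three cp variants above exercise host-to-node copy, node-to-host copy, and creation of missing destination directories on the node; a sketch (the local destination path is illustrative):

	minikube -p functional-720698 cp testdata/cp-test.txt /home/docker/cp-test.txt             # host -> node
	minikube -p functional-720698 cp functional-720698:/home/docker/cp-test.txt ./cp-test.txt  # node -> host
	minikube -p functional-720698 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt      # parent dirs created on the node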

TestFunctional/parallel/FileSync (0.33s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/721428/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-arm64 -p functional-720698 ssh "sudo cat /etc/test/nested/copy/721428/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.33s)

TestFunctional/parallel/CertSync (2.05s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/721428.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-720698 ssh "sudo cat /etc/ssl/certs/721428.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/721428.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-720698 ssh "sudo cat /usr/share/ca-certificates/721428.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-720698 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/7214282.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-720698 ssh "sudo cat /etc/ssl/certs/7214282.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/7214282.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-720698 ssh "sudo cat /usr/share/ca-certificates/7214282.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-720698 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.05s)
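
CertSync verifies that a host-provided certificate is synced into the guest at both path-named and hash-named locations (the 721428 in the file names comes from this run); a sketch of the spot checks:

	minikube -p functional-720698 ssh "sudo cat /etc/ssl/certs/721428.pem"
	minikube -p functional-720698 ssh "sudo cat /usr/share/ca-certificates/721428.pem"
	minikube -p functional-720698 ssh "sudo cat /etc/ssl/certs/51391683.0"    # hash-named copy of the same cert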

TestFunctional/parallel/NodeLabels (0.09s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-720698 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.74s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-720698 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-720698 ssh "sudo systemctl is-active docker": exit status 1 (376.716472ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-720698 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-720698 ssh "sudo systemctl is-active crio": exit status 1 (364.207354ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.74s)
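
With containerd as the selected runtime, the docker and crio units must both report inactive; `systemctl is-active` exits 3 for an inactive unit, which minikube ssh surfaces as the exit status 1 seen above. Sketch (the containerd line is an assumed positive control, not part of the test):

	minikube -p functional-720698 ssh "sudo systemctl is-active docker"       # prints inactive, non-zero exit
	minikube -p functional-720698 ssh "sudo systemctl is-active crio"        # prints inactive, non-zero exit
	minikube -p functional-720698 ssh "sudo systemctl is-active containerd"  # assumed: active, exits 0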

TestFunctional/parallel/License (0.23s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.23s)

TestFunctional/parallel/Version/short (0.08s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-arm64 -p functional-720698 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

TestFunctional/parallel/Version/components (1.44s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-arm64 -p functional-720698 version -o=json --components
functional_test.go:2270: (dbg) Done: out/minikube-linux-arm64 -p functional-720698 version -o=json --components: (1.439753436s)
--- PASS: TestFunctional/parallel/Version/components (1.44s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-720698 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-720698 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-720698
docker.io/kindest/kindnetd:v20240813-c6f155d6
docker.io/kicbase/echo-server:functional-720698
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-720698 image ls --format short --alsologtostderr:
I0916 19:26:11.818670  758716 out.go:345] Setting OutFile to fd 1 ...
I0916 19:26:11.818810  758716 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 19:26:11.818821  758716 out.go:358] Setting ErrFile to fd 2...
I0916 19:26:11.818826  758716 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 19:26:11.819088  758716 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19649-716050/.minikube/bin
I0916 19:26:11.819799  758716 config.go:182] Loaded profile config "functional-720698": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0916 19:26:11.819923  758716 config.go:182] Loaded profile config "functional-720698": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0916 19:26:11.820404  758716 cli_runner.go:164] Run: docker container inspect functional-720698 --format={{.State.Status}}
I0916 19:26:11.843370  758716 ssh_runner.go:195] Run: systemctl --version
I0916 19:26:11.843492  758716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-720698
I0916 19:26:11.865156  758716 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33545 SSHKeyPath:/home/jenkins/minikube-integration/19649-716050/.minikube/machines/functional-720698/id_rsa Username:docker}
I0916 19:26:11.959826  758716 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-720698 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-720698 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:1611cd | 1.94MB |
| localhost/my-image                          | functional-720698  | sha256:b29c09 | 831kB  |
| registry.k8s.io/kube-scheduler              | v1.31.1            | sha256:7f8aa3 | 18.5MB |
| registry.k8s.io/pause                       | 3.10               | sha256:afb617 | 268kB  |
| docker.io/kicbase/echo-server               | functional-720698  | sha256:ce2d2c | 2.17MB |
| docker.io/kindest/kindnetd                  | v20240813-c6f155d6 | sha256:6a23fa | 33.3MB |
| docker.io/library/minikube-local-cache-test | functional-720698  | sha256:4c26ec | 991B   |
| docker.io/library/nginx                     | latest             | sha256:195245 | 67.7MB |
| registry.k8s.io/pause                       | 3.3                | sha256:3d1873 | 249kB  |
| registry.k8s.io/pause                       | latest             | sha256:8cb209 | 71.3kB |
| docker.io/library/nginx                     | alpine             | sha256:b887ac | 19.6MB |
| registry.k8s.io/pause                       | 3.1                | sha256:8057e0 | 262kB  |
| registry.k8s.io/kube-proxy                  | v1.31.1            | sha256:24a140 | 26.8MB |
| registry.k8s.io/echoserver-arm              | 1.8                | sha256:72565b | 45.3MB |
| registry.k8s.io/etcd                        | 3.5.15-0           | sha256:27e383 | 66.5MB |
| registry.k8s.io/kube-apiserver              | v1.31.1            | sha256:d3f53a | 25.7MB |
| registry.k8s.io/kube-controller-manager     | v1.31.1            | sha256:279f38 | 23.9MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:ba04bb | 8.03MB |
| registry.k8s.io/coredns/coredns             | v1.11.3            | sha256:2f6c96 | 16.9MB |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-720698 image ls --format table --alsologtostderr:
I0916 19:26:16.144429  759102 out.go:345] Setting OutFile to fd 1 ...
I0916 19:26:16.144655  759102 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 19:26:16.144683  759102 out.go:358] Setting ErrFile to fd 2...
I0916 19:26:16.144702  759102 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 19:26:16.144982  759102 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19649-716050/.minikube/bin
I0916 19:26:16.145760  759102 config.go:182] Loaded profile config "functional-720698": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0916 19:26:16.145938  759102 config.go:182] Loaded profile config "functional-720698": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0916 19:26:16.146499  759102 cli_runner.go:164] Run: docker container inspect functional-720698 --format={{.State.Status}}
I0916 19:26:16.172541  759102 ssh_runner.go:195] Run: systemctl --version
I0916 19:26:16.172594  759102 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-720698
I0916 19:26:16.190919  759102 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33545 SSHKeyPath:/home/jenkins/minikube-integration/19649-716050/.minikube/machines/functional-720698/id_rsa Username:docker}
I0916 19:26:16.293326  759102 ssh_runner.go:195] Run: sudo crictl images --output json
E0916 19:26:18.434922  721428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/addons-350900/client.crt: no such file or directory" logger="UnhandledError"
E0916 19:26:18.441896  721428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/addons-350900/client.crt: no such file or directory" logger="UnhandledError"
E0916 19:26:18.453345  721428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/addons-350900/client.crt: no such file or directory" logger="UnhandledError"
E0916 19:26:18.474824  721428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/addons-350900/client.crt: no such file or directory" logger="UnhandledError"
E0916 19:26:18.516281  721428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/addons-350900/client.crt: no such file or directory" logger="UnhandledError"
E0916 19:26:18.597776  721428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/addons-350900/client.crt: no such file or directory" logger="UnhandledError"
2024/09/16 19:26:18 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)
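The table above is assembled guest-side: minikube inspects the container, dials SSH on the mapped port (33545 here), runs `sudo crictl images --output json`, and renders that JSON as a table. A minimal Go sketch of the render step, assuming crictl's usual JSON field names (id, repoTags, size), which this log does not itself confirm:

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // imageList mirrors the slice of `crictl images --output json` this sketch
    // needs; the field names are assumptions, not taken from the log above.
    type imageList struct {
        Images []struct {
            ID       string   `json:"id"`
            RepoTags []string `json:"repoTags"`
            Size     string   `json:"size"`
        } `json:"images"`
    }

    func main() {
        // On the minikube node this is the same command the log records.
        out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
        if err != nil {
            panic(err)
        }
        var list imageList
        if err := json.Unmarshal(out, &list); err != nil {
            panic(err)
        }
        for _, img := range list.Images {
            for _, tag := range img.RepoTags {
                // %-15.15s truncates the sha256 ID much like the table above.
                fmt.Printf("%-60s %-15.15s %s\n", tag, img.ID, img.Size)
            }
        }
    }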

TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-720698 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-720698 image ls --format json --alsologtostderr:
[{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:b29c09e80f85e4dd1fd975677603b7280e9b527279c212005169239b73dffc61","repoDigests":[],"repoTags":["localhost/my-image:functional-720698"],"size":"830617"},{"id":"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":["registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"66535646"},{"id":"sha256:24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d","repoDigests":["registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"26756812"},{"id":"sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"18306114"},{"id":"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-720698"],"size":"2173567"},{"id":"sha256:b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552","repoDigests":["docker.io/library/nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf"],"repoTags":["docker.io/library/nginx:alpine"],"size":"19621732"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d","repoDigests":["registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"18507674"},{"id":"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":["registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"267933"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"},{"id":"sha256:6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51","repoDigests":["docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"],"repoTags":["docker.io/kindest/kindnetd:v20240813-c6f155d6"],"size":"33309097"},{"id":"sha256:4c26ecd492d6984e0bbf3a2016c16aa7fc4a3fe5613ed01ce393241cd1001c03","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-720698"],"size":"991"},{"id":"sha256:279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"23948670"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:195245f0c79279e8b8e012efa02c91dad4cf7d0e44c0f4382fea68cd93088e6c","repoDigests":["docker.io/library/nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3"],"repoTags":["docker.io/library/nginx:latest"],"size":"67695038"},{"id":"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"16948420"},{"id":"sha256:d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853","repoDigests":["registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"25687130"},{"id":"sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"45324675"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-720698 image ls --format json --alsologtostderr:
I0916 19:26:15.889303  759068 out.go:345] Setting OutFile to fd 1 ...
I0916 19:26:15.889503  759068 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 19:26:15.889531  759068 out.go:358] Setting ErrFile to fd 2...
I0916 19:26:15.889550  759068 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 19:26:15.889844  759068 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19649-716050/.minikube/bin
I0916 19:26:15.890573  759068 config.go:182] Loaded profile config "functional-720698": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0916 19:26:15.890800  759068 config.go:182] Loaded profile config "functional-720698": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0916 19:26:15.891563  759068 cli_runner.go:164] Run: docker container inspect functional-720698 --format={{.State.Status}}
I0916 19:26:15.910328  759068 ssh_runner.go:195] Run: systemctl --version
I0916 19:26:15.910378  759068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-720698
I0916 19:26:15.928222  759068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33545 SSHKeyPath:/home/jenkins/minikube-integration/19649-716050/.minikube/machines/functional-720698/id_rsa Username:docker}
I0916 19:26:16.024461  759068 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-720698 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-720698 image ls --format yaml --alsologtostderr:
- id: sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-720698
size: "2173567"
- id: sha256:6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51
repoDigests:
- docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166
repoTags:
- docker.io/kindest/kindnetd:v20240813-c6f155d6
size: "33309097"
- id: sha256:b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552
repoDigests:
- docker.io/library/nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf
repoTags:
- docker.io/library/nginx:alpine
size: "19621732"
- id: sha256:195245f0c79279e8b8e012efa02c91dad4cf7d0e44c0f4382fea68cd93088e6c
repoDigests:
- docker.io/library/nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3
repoTags:
- docker.io/library/nginx:latest
size: "67695038"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests:
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "66535646"
- id: sha256:24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "26756812"
- id: sha256:7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "18507674"
- id: sha256:4c26ecd492d6984e0bbf3a2016c16aa7fc4a3fe5613ed01ce393241cd1001c03
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-720698
size: "991"
- id: sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "16948420"
- id: sha256:d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "25687130"
- id: sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests:
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "267933"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "45324675"
- id: sha256:279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "23948670"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"

functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-720698 image ls --format yaml --alsologtostderr:
I0916 19:26:12.061685  758750 out.go:345] Setting OutFile to fd 1 ...
I0916 19:26:12.061886  758750 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 19:26:12.061918  758750 out.go:358] Setting ErrFile to fd 2...
I0916 19:26:12.061940  758750 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 19:26:12.062246  758750 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19649-716050/.minikube/bin
I0916 19:26:12.063051  758750 config.go:182] Loaded profile config "functional-720698": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0916 19:26:12.063232  758750 config.go:182] Loaded profile config "functional-720698": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0916 19:26:12.064023  758750 cli_runner.go:164] Run: docker container inspect functional-720698 --format={{.State.Status}}
I0916 19:26:12.090378  758750 ssh_runner.go:195] Run: systemctl --version
I0916 19:26:12.090430  758750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-720698
I0916 19:26:12.111646  758750 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33545 SSHKeyPath:/home/jenkins/minikube-integration/19649-716050/.minikube/machines/functional-720698/id_rsa Username:docker}
I0916 19:26:12.208479  758750 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.56s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p functional-720698 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-720698 ssh pgrep buildkitd: exit status 1 (330.994634ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-arm64 -p functional-720698 image build -t localhost/my-image:functional-720698 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-arm64 -p functional-720698 image build -t localhost/my-image:functional-720698 testdata/build --alsologtostderr: (2.974361899s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-arm64 -p functional-720698 image build -t localhost/my-image:functional-720698 testdata/build --alsologtostderr:
I0916 19:26:12.658373  758842 out.go:345] Setting OutFile to fd 1 ...
I0916 19:26:12.659450  758842 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 19:26:12.659468  758842 out.go:358] Setting ErrFile to fd 2...
I0916 19:26:12.659475  758842 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 19:26:12.659789  758842 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19649-716050/.minikube/bin
I0916 19:26:12.660494  758842 config.go:182] Loaded profile config "functional-720698": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0916 19:26:12.662013  758842 config.go:182] Loaded profile config "functional-720698": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0916 19:26:12.662654  758842 cli_runner.go:164] Run: docker container inspect functional-720698 --format={{.State.Status}}
I0916 19:26:12.689431  758842 ssh_runner.go:195] Run: systemctl --version
I0916 19:26:12.689485  758842 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-720698
I0916 19:26:12.715986  758842 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33545 SSHKeyPath:/home/jenkins/minikube-integration/19649-716050/.minikube/machines/functional-720698/id_rsa Username:docker}
I0916 19:26:12.815742  758842 build_images.go:161] Building image from path: /tmp/build.2285558558.tar
I0916 19:26:12.815813  758842 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0916 19:26:12.825153  758842 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2285558558.tar
I0916 19:26:12.828720  758842 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2285558558.tar: stat -c "%s %y" /var/lib/minikube/build/build.2285558558.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2285558558.tar': No such file or directory
I0916 19:26:12.828753  758842 ssh_runner.go:362] scp /tmp/build.2285558558.tar --> /var/lib/minikube/build/build.2285558558.tar (3072 bytes)
I0916 19:26:12.858260  758842 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2285558558
I0916 19:26:12.868115  758842 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2285558558 -xf /var/lib/minikube/build/build.2285558558.tar
I0916 19:26:12.877938  758842 containerd.go:394] Building image: /var/lib/minikube/build/build.2285558558
I0916 19:26:12.878026  758842 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.2285558558 --local dockerfile=/var/lib/minikube/build/build.2285558558 --output type=image,name=localhost/my-image:functional-720698
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B 0.0s done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.2s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 DONE 0.1s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.3s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.5s

#6 [2/3] RUN true
#6 DONE 0.5s

#7 [3/3] ADD content.txt /
#7 DONE 0.1s

#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:602560abd26a19b433f2fe968ca7128a78f8c7a66c6c104d95778e33e3fb200c
#8 exporting manifest sha256:602560abd26a19b433f2fe968ca7128a78f8c7a66c6c104d95778e33e3fb200c 0.0s done
#8 exporting config sha256:b29c09e80f85e4dd1fd975677603b7280e9b527279c212005169239b73dffc61 0.0s done
#8 naming to localhost/my-image:functional-720698 done
#8 DONE 0.1s
I0916 19:26:15.536445  758842 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.2285558558 --local dockerfile=/var/lib/minikube/build/build.2285558558 --output type=image,name=localhost/my-image:functional-720698: (2.658385933s)
I0916 19:26:15.536519  758842 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2285558558
I0916 19:26:15.550292  758842 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2285558558.tar
I0916 19:26:15.560302  758842 build_images.go:217] Built localhost/my-image:functional-720698 from /tmp/build.2285558558.tar
I0916 19:26:15.560381  758842 build_images.go:133] succeeded building to: functional-720698
I0916 19:26:15.560399  758842 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-720698 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.56s)
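The build above runs end to end on the containerd path: the context is tarred to /tmp/build.2285558558.tar on the host, copied into the node, unpacked under /var/lib/minikube/build, and handed to BuildKit. A sketch of that last step, reusing the exact buildctl invocation the log records (the build directory and image name are the ones from this run):

    package main

    import (
        "log"
        "os"
        "os/exec"
    )

    func main() {
        // The unpacked context directory from the log above.
        buildDir := "/var/lib/minikube/build/build.2285558558"
        // Same invocation the log records at containerd.go:394: dockerfile
        // frontend, context and dockerfile from one directory, image output.
        cmd := exec.Command("sudo", "buildctl", "build",
            "--frontend", "dockerfile.v0",
            "--local", "context="+buildDir,
            "--local", "dockerfile="+buildDir,
            "--output", "type=image,name=localhost/my-image:functional-720698")
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            log.Fatal(err)
        }
    }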

TestFunctional/parallel/ImageCommands/Setup (0.8s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-720698
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.80s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.19s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-720698 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.19s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-720698 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.16s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-720698 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.16s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.48s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p functional-720698 image load --daemon kicbase/echo-server:functional-720698 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-arm64 -p functional-720698 image load --daemon kicbase/echo-server:functional-720698 --alsologtostderr: (1.200076824s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-720698 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.48s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.43s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p functional-720698 image load --daemon kicbase/echo-server:functional-720698 --alsologtostderr
functional_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p functional-720698 image load --daemon kicbase/echo-server:functional-720698 --alsologtostderr: (1.17513361s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-720698 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.43s)

TestFunctional/parallel/ServiceCmd/DeployApp (10.29s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-720698 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-720698 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-4jldx" [178f9c7f-1262-4f4b-a413-47c98cdf4a1f] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-4jldx" [178f9c7f-1262-4f4b-a413-47c98cdf4a1f] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.003156935s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (10.29s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.55s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-720698
functional_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p functional-720698 image load --daemon kicbase/echo-server:functional-720698 --alsologtostderr
functional_test.go:245: (dbg) Done: out/minikube-linux-arm64 -p functional-720698 image load --daemon kicbase/echo-server:functional-720698 --alsologtostderr: (1.015239879s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-720698 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.55s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.42s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-720698 image save kicbase/echo-server:functional-720698 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.42s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.56s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-arm64 -p functional-720698 image rm kicbase/echo-server:functional-720698 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-720698 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.56s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.68s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-arm64 -p functional-720698 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-720698 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.68s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.4s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-720698
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-720698 image save --daemon kicbase/echo-server:functional-720698 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-720698
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.40s)
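ImageSaveToFile, ImageRemove, ImageLoadFromFile, and ImageSaveDaemon above together cover the save/load round trip for the cluster's image store. A hedged sketch of the same sequence driven from Go; the binary, profile, image, and tar path are taken from this run:

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        bin, profile := "out/minikube-linux-arm64", "functional-720698"
        tar := "/home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar"
        steps := [][]string{
            // Export the image from the cluster's containerd store to a tarball.
            {bin, "-p", profile, "image", "save", "kicbase/echo-server:functional-720698", tar},
            // Re-import the tarball into the cluster.
            {bin, "-p", profile, "image", "load", tar},
            // Push the cluster's copy back into the host Docker daemon.
            {bin, "-p", profile, "image", "save", "--daemon", "kicbase/echo-server:functional-720698"},
        }
        for _, s := range steps {
            if out, err := exec.Command(s[0], s[1:]...).CombinedOutput(); err != nil {
                log.Fatalf("%v: %v\n%s", s, err, out)
            }
        }
    }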

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.48s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-720698 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-720698 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-720698 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 754931: os: process already finished
helpers_test.go:502: unable to terminate pid 754819: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-720698 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.48s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-720698 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.33s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-720698 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [0160e9a3-7b54-4c99-a5e6-fa7e54188038] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [0160e9a3-7b54-4c99-a5e6-fa7e54188038] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.004084774s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.33s)

TestFunctional/parallel/ServiceCmd/List (0.36s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-arm64 -p functional-720698 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.36s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.37s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-arm64 -p functional-720698 service list -o json
functional_test.go:1494: Took "370.864773ms" to run "out/minikube-linux-arm64 -p functional-720698 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.37s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.37s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-arm64 -p functional-720698 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:31812
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.37s)

TestFunctional/parallel/ServiceCmd/Format (0.37s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-arm64 -p functional-720698 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.37s)

TestFunctional/parallel/ServiceCmd/URL (0.38s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-arm64 -p functional-720698 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:31812
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.38s)
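The endpoint found above, http://192.168.49.2:31812, is just the node's InternalIP plus the Service's NodePort. A hedged sketch of recovering it with kubectl jsonpath queries; the expressions are standard kubectl syntax (an assumption, not taken from this log), while the context and Service names are from this run:

    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "strings"
    )

    // kubectl runs one kubectl command against this run's context and
    // returns its trimmed stdout.
    func kubectl(args ...string) string {
        out, err := exec.Command("kubectl",
            append([]string{"--context", "functional-720698"}, args...)...).Output()
        if err != nil {
            log.Fatal(err)
        }
        return strings.TrimSpace(string(out))
    }

    func main() {
        ip := kubectl("get", "nodes", "-o",
            `jsonpath={.items[0].status.addresses[?(@.type=="InternalIP")].address}`)
        port := kubectl("get", "svc", "hello-node", "-o", "jsonpath={.spec.ports[0].nodePort}")
        fmt.Printf("http://%s:%s\n", ip, port) // e.g. http://192.168.49.2:31812 above
    }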

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-720698 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)
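The jsonpath query above only returns an address once `minikube tunnel` has filled in the Service's LoadBalancer status, so a consumer has to poll. A hedged sketch of that wait; the context, Service name, and jsonpath match the test, while the timeout and interval are assumptions:

    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "time"
    )

    func main() {
        deadline := time.Now().Add(2 * time.Minute) // assumed budget, not from the log
        for time.Now().Before(deadline) {
            out, err := exec.Command("kubectl", "--context", "functional-720698",
                "get", "svc", "nginx-svc", "-o",
                "jsonpath={.status.loadBalancer.ingress[0].ip}").Output()
            // kubectl exits 0 with empty output while the ingress IP is unset.
            if err == nil && len(out) > 0 {
                fmt.Printf("ingress IP: %s\n", out) // 10.109.228.191 in this run
                return
            }
            time.Sleep(2 * time.Second)
        }
        log.Fatal("tunnel never assigned an ingress IP")
    }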

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.109.228.191 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-720698 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.41s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.41s)

TestFunctional/parallel/ProfileCmd/profile_list (0.4s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1315: Took "342.84512ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1329: Took "54.192002ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.40s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.37s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1366: Took "318.543569ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1379: Took "55.713225ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.37s)

TestFunctional/parallel/MountCmd/any-port (7.94s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-720698 /tmp/TestFunctionalparallelMountCmdany-port3900752787/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1726514756683936344" to /tmp/TestFunctionalparallelMountCmdany-port3900752787/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1726514756683936344" to /tmp/TestFunctionalparallelMountCmdany-port3900752787/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1726514756683936344" to /tmp/TestFunctionalparallelMountCmdany-port3900752787/001/test-1726514756683936344
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-720698 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-720698 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (324.50796ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-720698 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-720698 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 16 19:25 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 16 19:25 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 16 19:25 test-1726514756683936344
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-720698 ssh cat /mount-9p/test-1726514756683936344
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-720698 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [6851d6c4-6258-4c14-a323-21c7499e4ec8] Pending
helpers_test.go:344: "busybox-mount" [6851d6c4-6258-4c14-a323-21c7499e4ec8] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [6851d6c4-6258-4c14-a323-21c7499e4ec8] Running
helpers_test.go:344: "busybox-mount" [6851d6c4-6258-4c14-a323-21c7499e4ec8] Running / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [6851d6c4-6258-4c14-a323-21c7499e4ec8] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.00503569s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-720698 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-720698 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-720698 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-720698 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-720698 /tmp/TestFunctionalparallelMountCmdany-port3900752787/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.94s)
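Note that the first `findmnt -T /mount-9p | grep 9p` above fails with exit status 1: the 9p mount shows up asynchronously after the mount daemon starts, so the test retries once and succeeds. A hedged sketch of that retry as it would run inside the guest (attempt count and interval are assumptions):

    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "time"
    )

    func main() {
        // Poll findmnt rather than checking once; the mount races the check.
        for i := 0; i < 10; i++ {
            if err := exec.Command("findmnt", "-T", "/mount-9p").Run(); err == nil {
                fmt.Println("/mount-9p is mounted")
                return
            }
            time.Sleep(time.Second)
        }
        log.Fatal("/mount-9p never appeared")
    }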

TestFunctional/parallel/MountCmd/specific-port (1.8s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-720698 /tmp/TestFunctionalparallelMountCmdspecific-port3935158223/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-720698 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-720698 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (306.983827ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-720698 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-720698 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-720698 /tmp/TestFunctionalparallelMountCmdspecific-port3935158223/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-720698 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-720698 ssh "sudo umount -f /mount-9p": exit status 1 (283.056027ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-720698 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-720698 /tmp/TestFunctionalparallelMountCmdspecific-port3935158223/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.80s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.93s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-720698 /tmp/TestFunctionalparallelMountCmdVerifyCleanup848942582/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-720698 /tmp/TestFunctionalparallelMountCmdVerifyCleanup848942582/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-720698 /tmp/TestFunctionalparallelMountCmdVerifyCleanup848942582/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-720698 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-720698 ssh "findmnt -T" /mount1: exit status 1 (572.623173ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-720698 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-720698 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-720698 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-720698 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-720698 /tmp/TestFunctionalparallelMountCmdVerifyCleanup848942582/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-720698 /tmp/TestFunctionalparallelMountCmdVerifyCleanup848942582/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-720698 /tmp/TestFunctionalparallelMountCmdVerifyCleanup848942582/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.93s)

TestFunctional/delete_echo-server_images (0.04s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-720698
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.03s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-720698
--- PASS: TestFunctional/delete_my-image_image (0.03s)

TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-720698
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (119.18s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-058135 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E0916 19:26:23.567018  721428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/addons-350900/client.crt: no such file or directory" logger="UnhandledError"
E0916 19:26:28.689617  721428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/addons-350900/client.crt: no such file or directory" logger="UnhandledError"
E0916 19:26:38.930997  721428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/addons-350900/client.crt: no such file or directory" logger="UnhandledError"
E0916 19:26:59.412352  721428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/addons-350900/client.crt: no such file or directory" logger="UnhandledError"
E0916 19:27:40.373919  721428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/addons-350900/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-058135 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m58.348874116s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-058135 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (119.18s)

TestMultiControlPlane/serial/DeployApp (31.69s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-058135 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-058135 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-058135 -- rollout status deployment/busybox: (28.702645483s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-058135 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-058135 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-058135 -- exec busybox-7dff88458-f2b6z -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-058135 -- exec busybox-7dff88458-h2qcc -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-058135 -- exec busybox-7dff88458-n8rxj -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-058135 -- exec busybox-7dff88458-f2b6z -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-058135 -- exec busybox-7dff88458-h2qcc -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-058135 -- exec busybox-7dff88458-n8rxj -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-058135 -- exec busybox-7dff88458-f2b6z -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-058135 -- exec busybox-7dff88458-h2qcc -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-058135 -- exec busybox-7dff88458-n8rxj -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (31.69s)

TestMultiControlPlane/serial/PingHostFromPods (1.55s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-058135 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-058135 -- exec busybox-7dff88458-f2b6z -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-058135 -- exec busybox-7dff88458-f2b6z -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-058135 -- exec busybox-7dff88458-h2qcc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-058135 -- exec busybox-7dff88458-h2qcc -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-058135 -- exec busybox-7dff88458-n8rxj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-058135 -- exec busybox-7dff88458-n8rxj -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.55s)
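
Note: the PingHostFromPods check above recovers the host gateway IP from busybox nslookup output with `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3`. A minimal Go sketch of that same extraction, fed a hypothetical sample of busybox-style output (real output varies by resolver), could look like:

package main

import (
	"fmt"
	"strings"
)

// hostIPFromNslookup mirrors `awk 'NR==5' | cut -d' ' -f3`: take the
// fifth line of the output and return its third space-separated field.
// Like cut, it splits on single spaces without collapsing repeats.
func hostIPFromNslookup(out string) string {
	lines := strings.Split(out, "\n")
	if len(lines) < 5 {
		return ""
	}
	fields := strings.Split(lines[4], " ")
	if len(fields) < 3 {
		return ""
	}
	return fields[2]
}

func main() {
	// Hypothetical busybox nslookup output; line 5 carries the answer.
	sample := "Server:    10.96.0.10\n" +
		"Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local\n" +
		"\n" +
		"Name:      host.minikube.internal\n" +
		"Address 1: 192.168.49.1 host.minikube.internal\n"
	fmt.Println(hostIPFromNslookup(sample)) // prints 192.168.49.1
}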

TestMultiControlPlane/serial/AddWorkerNode (23.07s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-058135 -v=7 --alsologtostderr
E0916 19:29:02.296220  721428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/addons-350900/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-058135 -v=7 --alsologtostderr: (22.001588867s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-058135 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-058135 status -v=7 --alsologtostderr: (1.063531991s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (23.07s)

TestMultiControlPlane/serial/NodeLabels (0.11s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-058135 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.11s)
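
Note: the NodeLabels assertion reads every node's label map through a kubectl jsonpath query. An equivalent sketch using client-go (an assumption for illustration; the test itself shells out to kubectl, and a kubeconfig is assumed at $HOME/.kube/config) could look like:

package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	home, _ := os.UserHomeDir()
	// Assumption: kubeconfig in the default location.
	cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(home, ".kube", "config"))
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := clientset.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	// Same information the jsonpath query extracts: one label map per node.
	for _, n := range nodes.Items {
		fmt.Println(n.Name, n.Labels)
	}
}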

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.75s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.75s)

TestMultiControlPlane/serial/CopyFile (19.05s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-arm64 -p ha-058135 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Done: out/minikube-linux-arm64 -p ha-058135 status --output json -v=7 --alsologtostderr: (1.035264767s)
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-058135 cp testdata/cp-test.txt ha-058135:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-058135 ssh -n ha-058135 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-058135 cp ha-058135:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3902162404/001/cp-test_ha-058135.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-058135 ssh -n ha-058135 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-058135 cp ha-058135:/home/docker/cp-test.txt ha-058135-m02:/home/docker/cp-test_ha-058135_ha-058135-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-058135 ssh -n ha-058135 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-058135 ssh -n ha-058135-m02 "sudo cat /home/docker/cp-test_ha-058135_ha-058135-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-058135 cp ha-058135:/home/docker/cp-test.txt ha-058135-m03:/home/docker/cp-test_ha-058135_ha-058135-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-058135 ssh -n ha-058135 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-058135 ssh -n ha-058135-m03 "sudo cat /home/docker/cp-test_ha-058135_ha-058135-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-058135 cp ha-058135:/home/docker/cp-test.txt ha-058135-m04:/home/docker/cp-test_ha-058135_ha-058135-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-058135 ssh -n ha-058135 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-058135 ssh -n ha-058135-m04 "sudo cat /home/docker/cp-test_ha-058135_ha-058135-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-058135 cp testdata/cp-test.txt ha-058135-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-058135 ssh -n ha-058135-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-058135 cp ha-058135-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3902162404/001/cp-test_ha-058135-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-058135 ssh -n ha-058135-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-058135 cp ha-058135-m02:/home/docker/cp-test.txt ha-058135:/home/docker/cp-test_ha-058135-m02_ha-058135.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-058135 ssh -n ha-058135-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-058135 ssh -n ha-058135 "sudo cat /home/docker/cp-test_ha-058135-m02_ha-058135.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-058135 cp ha-058135-m02:/home/docker/cp-test.txt ha-058135-m03:/home/docker/cp-test_ha-058135-m02_ha-058135-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-058135 ssh -n ha-058135-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-058135 ssh -n ha-058135-m03 "sudo cat /home/docker/cp-test_ha-058135-m02_ha-058135-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-058135 cp ha-058135-m02:/home/docker/cp-test.txt ha-058135-m04:/home/docker/cp-test_ha-058135-m02_ha-058135-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-058135 ssh -n ha-058135-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-058135 ssh -n ha-058135-m04 "sudo cat /home/docker/cp-test_ha-058135-m02_ha-058135-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-058135 cp testdata/cp-test.txt ha-058135-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-058135 ssh -n ha-058135-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-058135 cp ha-058135-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3902162404/001/cp-test_ha-058135-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-058135 ssh -n ha-058135-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-058135 cp ha-058135-m03:/home/docker/cp-test.txt ha-058135:/home/docker/cp-test_ha-058135-m03_ha-058135.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-058135 ssh -n ha-058135-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-058135 ssh -n ha-058135 "sudo cat /home/docker/cp-test_ha-058135-m03_ha-058135.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-058135 cp ha-058135-m03:/home/docker/cp-test.txt ha-058135-m02:/home/docker/cp-test_ha-058135-m03_ha-058135-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-058135 ssh -n ha-058135-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-058135 ssh -n ha-058135-m02 "sudo cat /home/docker/cp-test_ha-058135-m03_ha-058135-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-058135 cp ha-058135-m03:/home/docker/cp-test.txt ha-058135-m04:/home/docker/cp-test_ha-058135-m03_ha-058135-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-058135 ssh -n ha-058135-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-058135 ssh -n ha-058135-m04 "sudo cat /home/docker/cp-test_ha-058135-m03_ha-058135-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-058135 cp testdata/cp-test.txt ha-058135-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-058135 ssh -n ha-058135-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-058135 cp ha-058135-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3902162404/001/cp-test_ha-058135-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-058135 ssh -n ha-058135-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-058135 cp ha-058135-m04:/home/docker/cp-test.txt ha-058135:/home/docker/cp-test_ha-058135-m04_ha-058135.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-058135 ssh -n ha-058135-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-058135 ssh -n ha-058135 "sudo cat /home/docker/cp-test_ha-058135-m04_ha-058135.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-058135 cp ha-058135-m04:/home/docker/cp-test.txt ha-058135-m02:/home/docker/cp-test_ha-058135-m04_ha-058135-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-058135 ssh -n ha-058135-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-058135 ssh -n ha-058135-m02 "sudo cat /home/docker/cp-test_ha-058135-m04_ha-058135-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-058135 cp ha-058135-m04:/home/docker/cp-test.txt ha-058135-m03:/home/docker/cp-test_ha-058135-m04_ha-058135-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-058135 ssh -n ha-058135-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-058135 ssh -n ha-058135-m03 "sudo cat /home/docker/cp-test_ha-058135-m04_ha-058135-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.05s)
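
Note: the CopyFile matrix above repeats one primitive for every node pair: `minikube cp` a file in, then `minikube ssh` it back out. A minimal sketch of a single round-trip under the same pattern (assumes a minikube binary on PATH; the profile name is reused from this run):

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

// run invokes minikube with the given arguments and returns its output.
func run(args ...string) string {
	out, err := exec.Command("minikube", args...).CombinedOutput()
	if err != nil {
		log.Fatalf("minikube %v: %v\n%s", args, err, out)
	}
	return string(out)
}

func main() {
	const profile = "ha-058135" // profile name taken from the log above
	// Copy a local file onto the node, then read it back over ssh,
	// matching the cp/ssh command pairs in the test output.
	run("-p", profile, "cp", "testdata/cp-test.txt", profile+":/home/docker/cp-test.txt")
	got := run("-p", profile, "ssh", "-n", profile, "sudo cat /home/docker/cp-test.txt")
	fmt.Println(strings.TrimSpace(got))
}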

TestMultiControlPlane/serial/StopSecondaryNode (12.83s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-arm64 -p ha-058135 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-arm64 -p ha-058135 node stop m02 -v=7 --alsologtostderr: (12.120375355s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-arm64 -p ha-058135 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-058135 status -v=7 --alsologtostderr: exit status 7 (706.920784ms)

-- stdout --
	ha-058135
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-058135-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-058135-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-058135-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0916 19:29:49.285946  775131 out.go:345] Setting OutFile to fd 1 ...
	I0916 19:29:49.286059  775131 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 19:29:49.286064  775131 out.go:358] Setting ErrFile to fd 2...
	I0916 19:29:49.286069  775131 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 19:29:49.286364  775131 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19649-716050/.minikube/bin
	I0916 19:29:49.286550  775131 out.go:352] Setting JSON to false
	I0916 19:29:49.286568  775131 mustload.go:65] Loading cluster: ha-058135
	I0916 19:29:49.286984  775131 config.go:182] Loaded profile config "ha-058135": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 19:29:49.286993  775131 status.go:255] checking status of ha-058135 ...
	I0916 19:29:49.287681  775131 cli_runner.go:164] Run: docker container inspect ha-058135 --format={{.State.Status}}
	I0916 19:29:49.289204  775131 notify.go:220] Checking for updates...
	I0916 19:29:49.307110  775131 status.go:330] ha-058135 host status = "Running" (err=<nil>)
	I0916 19:29:49.307218  775131 host.go:66] Checking if "ha-058135" exists ...
	I0916 19:29:49.308364  775131 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-058135
	I0916 19:29:49.341543  775131 host.go:66] Checking if "ha-058135" exists ...
	I0916 19:29:49.341926  775131 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 19:29:49.341976  775131 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-058135
	I0916 19:29:49.362801  775131 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33550 SSHKeyPath:/home/jenkins/minikube-integration/19649-716050/.minikube/machines/ha-058135/id_rsa Username:docker}
	I0916 19:29:49.460372  775131 ssh_runner.go:195] Run: systemctl --version
	I0916 19:29:49.464978  775131 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 19:29:49.476354  775131 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 19:29:49.530121  775131 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:51 OomKillDisable:true NGoroutines:71 SystemTime:2024-09-16 19:29:49.519578471 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0916 19:29:49.530701  775131 kubeconfig.go:125] found "ha-058135" server: "https://192.168.49.254:8443"
	I0916 19:29:49.530741  775131 api_server.go:166] Checking apiserver status ...
	I0916 19:29:49.530793  775131 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 19:29:49.542102  775131 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1476/cgroup
	I0916 19:29:49.551783  775131 api_server.go:182] apiserver freezer: "8:freezer:/docker/cb8423f74cba7dd88dbc5da0f38340e48bbe0c22c35b063faa9d67c2fd01c10b/kubepods/burstable/podcc4c9d34bf215d18852c1e57b9a95f81/68b6aaeeaef9dd8a847c85c07542dca2e6e8fe70c9df76f4ccffe1e483871dc9"
	I0916 19:29:49.551857  775131 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/cb8423f74cba7dd88dbc5da0f38340e48bbe0c22c35b063faa9d67c2fd01c10b/kubepods/burstable/podcc4c9d34bf215d18852c1e57b9a95f81/68b6aaeeaef9dd8a847c85c07542dca2e6e8fe70c9df76f4ccffe1e483871dc9/freezer.state
	I0916 19:29:49.560600  775131 api_server.go:204] freezer state: "THAWED"
	I0916 19:29:49.560632  775131 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0916 19:29:49.568752  775131 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0916 19:29:49.568786  775131 status.go:422] ha-058135 apiserver status = Running (err=<nil>)
	I0916 19:29:49.568797  775131 status.go:257] ha-058135 status: &{Name:ha-058135 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 19:29:49.568815  775131 status.go:255] checking status of ha-058135-m02 ...
	I0916 19:29:49.569115  775131 cli_runner.go:164] Run: docker container inspect ha-058135-m02 --format={{.State.Status}}
	I0916 19:29:49.585737  775131 status.go:330] ha-058135-m02 host status = "Stopped" (err=<nil>)
	I0916 19:29:49.585760  775131 status.go:343] host is not running, skipping remaining checks
	I0916 19:29:49.585769  775131 status.go:257] ha-058135-m02 status: &{Name:ha-058135-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 19:29:49.585790  775131 status.go:255] checking status of ha-058135-m03 ...
	I0916 19:29:49.586096  775131 cli_runner.go:164] Run: docker container inspect ha-058135-m03 --format={{.State.Status}}
	I0916 19:29:49.601308  775131 status.go:330] ha-058135-m03 host status = "Running" (err=<nil>)
	I0916 19:29:49.601334  775131 host.go:66] Checking if "ha-058135-m03" exists ...
	I0916 19:29:49.601637  775131 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-058135-m03
	I0916 19:29:49.618538  775131 host.go:66] Checking if "ha-058135-m03" exists ...
	I0916 19:29:49.618880  775131 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 19:29:49.618926  775131 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-058135-m03
	I0916 19:29:49.635245  775131 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33560 SSHKeyPath:/home/jenkins/minikube-integration/19649-716050/.minikube/machines/ha-058135-m03/id_rsa Username:docker}
	I0916 19:29:49.728441  775131 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 19:29:49.741898  775131 kubeconfig.go:125] found "ha-058135" server: "https://192.168.49.254:8443"
	I0916 19:29:49.741931  775131 api_server.go:166] Checking apiserver status ...
	I0916 19:29:49.742014  775131 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 19:29:49.753943  775131 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1358/cgroup
	I0916 19:29:49.763799  775131 api_server.go:182] apiserver freezer: "8:freezer:/docker/53f1db3d6cbaf91f1eaefb94983532a222744c1a1559dd0476b475a412d4b707/kubepods/burstable/podee1cf04aac5a4fa59587b277680f11af/8bf27e50fc94016c8a904d321ae292e3d6a311248ebd6286deed212ad68c59c8"
	I0916 19:29:49.763874  775131 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/53f1db3d6cbaf91f1eaefb94983532a222744c1a1559dd0476b475a412d4b707/kubepods/burstable/podee1cf04aac5a4fa59587b277680f11af/8bf27e50fc94016c8a904d321ae292e3d6a311248ebd6286deed212ad68c59c8/freezer.state
	I0916 19:29:49.772569  775131 api_server.go:204] freezer state: "THAWED"
	I0916 19:29:49.772598  775131 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0916 19:29:49.780644  775131 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0916 19:29:49.780676  775131 status.go:422] ha-058135-m03 apiserver status = Running (err=<nil>)
	I0916 19:29:49.780688  775131 status.go:257] ha-058135-m03 status: &{Name:ha-058135-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 19:29:49.780736  775131 status.go:255] checking status of ha-058135-m04 ...
	I0916 19:29:49.781157  775131 cli_runner.go:164] Run: docker container inspect ha-058135-m04 --format={{.State.Status}}
	I0916 19:29:49.800763  775131 status.go:330] ha-058135-m04 host status = "Running" (err=<nil>)
	I0916 19:29:49.800786  775131 host.go:66] Checking if "ha-058135-m04" exists ...
	I0916 19:29:49.801100  775131 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-058135-m04
	I0916 19:29:49.818765  775131 host.go:66] Checking if "ha-058135-m04" exists ...
	I0916 19:29:49.819055  775131 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 19:29:49.819102  775131 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-058135-m04
	I0916 19:29:49.838646  775131 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33565 SSHKeyPath:/home/jenkins/minikube-integration/19649-716050/.minikube/machines/ha-058135-m04/id_rsa Username:docker}
	I0916 19:29:49.932851  775131 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 19:29:49.944457  775131 status.go:257] ha-058135-m04 status: &{Name:ha-058135-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.83s)
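
Note: the status trace above ends each control-plane check with an HTTPS probe of /healthz, which returned 200 here. A bare-bones sketch of such a probe (an illustration, not minikube's implementation: certificate verification is skipped for brevity, whereas real tooling should trust the cluster CA, and /healthz may require credentials depending on cluster configuration):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Assumption: skip TLS verification to keep the sketch short.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.49.254:8443/healthz")
	if err != nil {
		fmt.Println("apiserver unreachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("healthz:", resp.StatusCode) // 200 in the passing run above
}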

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.57s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.57s)

TestMultiControlPlane/serial/RestartSecondaryNode (19.1s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-arm64 -p ha-058135 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Done: out/minikube-linux-arm64 -p ha-058135 node start m02 -v=7 --alsologtostderr: (17.920291698s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p ha-058135 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-linux-arm64 -p ha-058135 status -v=7 --alsologtostderr: (1.033275791s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (19.10s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.81s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.81s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (143.67s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-058135 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-058135 -v=7 --alsologtostderr
E0916 19:30:31.996228  721428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/functional-720698/client.crt: no such file or directory" logger="UnhandledError"
E0916 19:30:32.002583  721428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/functional-720698/client.crt: no such file or directory" logger="UnhandledError"
E0916 19:30:32.014190  721428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/functional-720698/client.crt: no such file or directory" logger="UnhandledError"
E0916 19:30:32.035705  721428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/functional-720698/client.crt: no such file or directory" logger="UnhandledError"
E0916 19:30:32.077082  721428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/functional-720698/client.crt: no such file or directory" logger="UnhandledError"
E0916 19:30:32.158400  721428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/functional-720698/client.crt: no such file or directory" logger="UnhandledError"
E0916 19:30:32.319783  721428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/functional-720698/client.crt: no such file or directory" logger="UnhandledError"
E0916 19:30:32.641176  721428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/functional-720698/client.crt: no such file or directory" logger="UnhandledError"
E0916 19:30:33.282786  721428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/functional-720698/client.crt: no such file or directory" logger="UnhandledError"
E0916 19:30:34.564119  721428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/functional-720698/client.crt: no such file or directory" logger="UnhandledError"
E0916 19:30:37.125550  721428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/functional-720698/client.crt: no such file or directory" logger="UnhandledError"
E0916 19:30:42.247140  721428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/functional-720698/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-linux-arm64 stop -p ha-058135 -v=7 --alsologtostderr: (37.613875488s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-arm64 start -p ha-058135 --wait=true -v=7 --alsologtostderr
E0916 19:30:52.488476  721428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/functional-720698/client.crt: no such file or directory" logger="UnhandledError"
E0916 19:31:12.969753  721428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/functional-720698/client.crt: no such file or directory" logger="UnhandledError"
E0916 19:31:18.433323  721428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/addons-350900/client.crt: no such file or directory" logger="UnhandledError"
E0916 19:31:46.137702  721428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/addons-350900/client.crt: no such file or directory" logger="UnhandledError"
E0916 19:31:53.931703  721428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/functional-720698/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-arm64 start -p ha-058135 --wait=true -v=7 --alsologtostderr: (1m45.904574934s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-058135
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (143.67s)

TestMultiControlPlane/serial/DeleteSecondaryNode (9.83s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-arm64 -p ha-058135 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-arm64 -p ha-058135 node delete m03 -v=7 --alsologtostderr: (8.879869498s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-arm64 -p ha-058135 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (9.83s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.52s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.52s)

TestMultiControlPlane/serial/StopCluster (36s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-arm64 -p ha-058135 stop -v=7 --alsologtostderr
E0916 19:33:15.853123  721428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/functional-720698/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:531: (dbg) Done: out/minikube-linux-arm64 -p ha-058135 stop -v=7 --alsologtostderr: (35.899227506s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-arm64 -p ha-058135 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-058135 status -v=7 --alsologtostderr: exit status 7 (103.7346ms)

-- stdout --
	ha-058135
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-058135-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-058135-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0916 19:33:20.390546  789425 out.go:345] Setting OutFile to fd 1 ...
	I0916 19:33:20.390761  789425 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 19:33:20.390773  789425 out.go:358] Setting ErrFile to fd 2...
	I0916 19:33:20.390779  789425 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 19:33:20.391038  789425 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19649-716050/.minikube/bin
	I0916 19:33:20.391220  789425 out.go:352] Setting JSON to false
	I0916 19:33:20.391244  789425 mustload.go:65] Loading cluster: ha-058135
	I0916 19:33:20.391751  789425 config.go:182] Loaded profile config "ha-058135": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 19:33:20.391776  789425 status.go:255] checking status of ha-058135 ...
	I0916 19:33:20.392310  789425 cli_runner.go:164] Run: docker container inspect ha-058135 --format={{.State.Status}}
	I0916 19:33:20.392851  789425 notify.go:220] Checking for updates...
	I0916 19:33:20.408980  789425 status.go:330] ha-058135 host status = "Stopped" (err=<nil>)
	I0916 19:33:20.409002  789425 status.go:343] host is not running, skipping remaining checks
	I0916 19:33:20.409008  789425 status.go:257] ha-058135 status: &{Name:ha-058135 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 19:33:20.409036  789425 status.go:255] checking status of ha-058135-m02 ...
	I0916 19:33:20.409331  789425 cli_runner.go:164] Run: docker container inspect ha-058135-m02 --format={{.State.Status}}
	I0916 19:33:20.426097  789425 status.go:330] ha-058135-m02 host status = "Stopped" (err=<nil>)
	I0916 19:33:20.426120  789425 status.go:343] host is not running, skipping remaining checks
	I0916 19:33:20.426128  789425 status.go:257] ha-058135-m02 status: &{Name:ha-058135-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 19:33:20.426149  789425 status.go:255] checking status of ha-058135-m04 ...
	I0916 19:33:20.426456  789425 cli_runner.go:164] Run: docker container inspect ha-058135-m04 --format={{.State.Status}}
	I0916 19:33:20.450908  789425 status.go:330] ha-058135-m04 host status = "Stopped" (err=<nil>)
	I0916 19:33:20.450934  789425 status.go:343] host is not running, skipping remaining checks
	I0916 19:33:20.450948  789425 status.go:257] ha-058135-m04 status: &{Name:ha-058135-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.00s)
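
Note: `minikube status` signals state through its exit code as well as stdout; the fully stopped cluster above produced exit status 7 rather than an error message. A small sketch that surfaces the code instead of treating it as a hard failure (assumes minikube on PATH; profile name reused from this run):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("minikube", "-p", "ha-058135", "status")
	out, err := cmd.Output() // stdout is still returned on a nonzero exit
	fmt.Print(string(out))
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		// A stopped or partially running cluster reports a nonzero code
		// (exit status 7 in the run above).
		fmt.Println("status exit code:", ee.ExitCode())
	}
}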

TestMultiControlPlane/serial/RestartCluster (78.84s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-arm64 start -p ha-058135 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
ha_test.go:560: (dbg) Done: out/minikube-linux-arm64 start -p ha-058135 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m17.896430461s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-arm64 -p ha-058135 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (78.84s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.53s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.53s)

TestMultiControlPlane/serial/AddSecondaryNode (46.06s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-058135 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-arm64 node add -p ha-058135 --control-plane -v=7 --alsologtostderr: (45.043762911s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-arm64 -p ha-058135 status -v=7 --alsologtostderr
ha_test.go:611: (dbg) Done: out/minikube-linux-arm64 -p ha-058135 status -v=7 --alsologtostderr: (1.012714475s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (46.06s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.77s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.77s)

TestJSONOutput/start/Command (51.7s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-284986 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd
E0916 19:35:59.694868  721428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/functional-720698/client.crt: no such file or directory" logger="UnhandledError"
E0916 19:36:18.433836  721428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/addons-350900/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-284986 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd: (51.69492319s)
--- PASS: TestJSONOutput/start/Command (51.70s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.74s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-284986 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.74s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.66s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-284986 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.66s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.78s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-284986 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-284986 --output=json --user=testUser: (5.783243492s)
--- PASS: TestJSONOutput/stop/Command (5.78s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.2s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-750302 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-750302 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (65.476434ms)

-- stdout --
	{"specversion":"1.0","id":"df6772b8-0b59-4155-bbc6-58b40df73d6b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-750302] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"ac2897df-8f1c-44ee-8927-a29bdcb49ebb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19649"}}
	{"specversion":"1.0","id":"2bbd420a-b7d9-4200-949d-644d3e9d5fba","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"860c70f3-c5eb-4b05-8331-adfee78a3166","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19649-716050/kubeconfig"}}
	{"specversion":"1.0","id":"08fcf601-88ab-4b85-b9c2-28c55dfcda32","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19649-716050/.minikube"}}
	{"specversion":"1.0","id":"89712771-464d-4b86-a564-0608c53ce3f0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"7e05befa-c2ff-458a-93e7-4c3602d915f6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"dee4e008-4188-404e-8e98-6bf9e3cde854","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-750302" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-750302
--- PASS: TestErrorJSONOutput (0.20s)
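
Note: with --output=json, minikube emits one CloudEvents-style JSON object per line, as in the stdout block above. A hedged sketch that decodes such lines (field names taken from the events shown; only a subset of the schema is modeled):

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event mirrors the fields visible in the JSON lines above.
type event struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin) // e.g. pipe `minikube start --output=json` in
	for sc.Scan() {
		var e event
		if err := json.Unmarshal(sc.Bytes(), &e); err != nil {
			continue // skip non-JSON lines
		}
		fmt.Printf("%s: %s\n", e.Type, e.Data["message"])
	}
}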

TestKicCustomNetwork/create_custom_network (37.13s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-054298 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-054298 --network=: (35.081254027s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-054298" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-054298
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-054298: (2.025916853s)
--- PASS: TestKicCustomNetwork/create_custom_network (37.13s)

TestKicCustomNetwork/use_default_bridge_network (34.91s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-773339 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-773339 --network=bridge: (32.982540314s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-773339" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-773339
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-773339: (1.896100869s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (34.91s)

                                                
                                    
TestKicExistingNetwork (33.3s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-699579 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-699579 --network=existing-network: (31.161901867s)
helpers_test.go:175: Cleaning up "existing-network-699579" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-699579
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-699579: (1.9606314s)
--- PASS: TestKicExistingNetwork (33.30s)

                                                
                                    
TestKicCustomSubnet (34.39s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-206678 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-206678 --subnet=192.168.60.0/24: (32.282320488s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-206678 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-206678" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-206678
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-206678: (2.089047342s)
--- PASS: TestKicCustomSubnet (34.39s)
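The subnet assertion relies on docker's inspect template, shown verbatim above; a manual reproduction with a placeholder profile name:

    minikube start -p subnet-demo --subnet=192.168.60.0/24
    docker network inspect subnet-demo --format "{{(index .IPAM.Config 0).Subnet}}"
    # expected output: 192.168.60.0/24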

                                                
                                    
TestKicStaticIP (34.23s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-146183 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-146183 --static-ip=192.168.200.200: (31.996728689s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-146183 ip
helpers_test.go:175: Cleaning up "static-ip-146183" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-146183
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-146183: (2.075714357s)
--- PASS: TestKicStaticIP (34.23s)
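The equivalent manual check for the static-IP path (placeholder profile name; the address is the one used above):

    minikube start -p static-demo --static-ip=192.168.200.200
    minikube -p static-demo ip    # should print 192.168.200.200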

                                                
                                    
TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (65.6s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-721582 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-721582 --driver=docker  --container-runtime=containerd: (31.492743106s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-724568 --driver=docker  --container-runtime=containerd
E0916 19:40:31.996613  721428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/functional-720698/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-724568 --driver=docker  --container-runtime=containerd: (28.832100697s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-721582
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-724568
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-724568" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-724568
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-724568: (2.016779163s)
helpers_test.go:175: Cleaning up "first-721582" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-721582
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-721582: (1.9665778s)
--- PASS: TestMinikubeProfile (65.60s)
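The profile assertions parse "minikube profile list -ojson". A sketch, assuming jq and that the JSON keeps its usual shape with "valid" and "invalid" profile arrays:

    minikube profile list -ojson | jq -r '.valid[].Name'   # one profile name per line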

                                                
                                    
TestMountStart/serial/StartWithMountFirst (7.18s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-652573 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-652573 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (6.175444233s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.18s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.3s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-652573 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.30s)
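The two steps above amount to the following, with a placeholder profile name (the 9p mount flags are the ones the test passes; /minikube-host is where the host directory appears inside the node):

    minikube start -p mount-demo --memory=2048 --mount \
      --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 \
      --no-kubernetes --driver=docker --container-runtime=containerd
    minikube -p mount-demo ssh -- ls /minikube-host   # lists the mounted host directory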

                                                
                                    
TestMountStart/serial/StartWithMountSecond (6.92s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-654641 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-654641 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.916856377s)
--- PASS: TestMountStart/serial/StartWithMountSecond (6.92s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.28s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-654641 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.28s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.63s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-652573 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-652573 --alsologtostderr -v=5: (1.634870516s)
--- PASS: TestMountStart/serial/DeleteFirst (1.63s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.28s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-654641 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.28s)

                                                
                                    
TestMountStart/serial/Stop (1.2s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-654641
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-654641: (1.200196071s)
--- PASS: TestMountStart/serial/Stop (1.20s)

                                                
                                    
TestMountStart/serial/RestartStopped (7.51s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-654641
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-654641: (6.509017181s)
--- PASS: TestMountStart/serial/RestartStopped (7.51s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-654641 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (68.16s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-209859 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0916 19:41:18.433851  721428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/addons-350900/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-209859 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m7.640556933s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-209859 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (68.16s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (16.84s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-209859 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-209859 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-209859 -- rollout status deployment/busybox: (15.047858906s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-209859 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-209859 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-209859 -- exec busybox-7dff88458-9htql -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-209859 -- exec busybox-7dff88458-l89z4 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-209859 -- exec busybox-7dff88458-9htql -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-209859 -- exec busybox-7dff88458-l89z4 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-209859 -- exec busybox-7dff88458-9htql -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-209859 -- exec busybox-7dff88458-l89z4 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (16.84s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (1.02s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-209859 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-209859 -- exec busybox-7dff88458-9htql -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-209859 -- exec busybox-7dff88458-9htql -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-209859 -- exec busybox-7dff88458-l89z4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-209859 -- exec busybox-7dff88458-l89z4 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.02s)
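A note on the pipeline above: with busybox's nslookup, the answer for host.minikube.internal typically lands on the fifth output line, so awk 'NR==5' | cut -d' ' -f3 extracts the host IP, which the pod then pings. Schematically, with a placeholder pod name:

    HOST_IP=$(kubectl exec busybox-7dff88458-xxxxx -- sh -c \
      "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
    kubectl exec busybox-7dff88458-xxxxx -- sh -c "ping -c 1 $HOST_IP"   # 192.168.67.1 in this run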

                                                
                                    
TestMultiNode/serial/AddNode (18.9s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-209859 -v 3 --alsologtostderr
E0916 19:42:41.499714  721428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/addons-350900/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-209859 -v 3 --alsologtostderr: (18.234755242s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-209859 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (18.90s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.09s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-209859 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.34s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.34s)

                                                
                                    
TestMultiNode/serial/CopyFile (10.28s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-209859 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-209859 cp testdata/cp-test.txt multinode-209859:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-209859 ssh -n multinode-209859 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-209859 cp multinode-209859:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile982170147/001/cp-test_multinode-209859.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-209859 ssh -n multinode-209859 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-209859 cp multinode-209859:/home/docker/cp-test.txt multinode-209859-m02:/home/docker/cp-test_multinode-209859_multinode-209859-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-209859 ssh -n multinode-209859 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-209859 ssh -n multinode-209859-m02 "sudo cat /home/docker/cp-test_multinode-209859_multinode-209859-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-209859 cp multinode-209859:/home/docker/cp-test.txt multinode-209859-m03:/home/docker/cp-test_multinode-209859_multinode-209859-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-209859 ssh -n multinode-209859 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-209859 ssh -n multinode-209859-m03 "sudo cat /home/docker/cp-test_multinode-209859_multinode-209859-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-209859 cp testdata/cp-test.txt multinode-209859-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-209859 ssh -n multinode-209859-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-209859 cp multinode-209859-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile982170147/001/cp-test_multinode-209859-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-209859 ssh -n multinode-209859-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-209859 cp multinode-209859-m02:/home/docker/cp-test.txt multinode-209859:/home/docker/cp-test_multinode-209859-m02_multinode-209859.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-209859 ssh -n multinode-209859-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-209859 ssh -n multinode-209859 "sudo cat /home/docker/cp-test_multinode-209859-m02_multinode-209859.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-209859 cp multinode-209859-m02:/home/docker/cp-test.txt multinode-209859-m03:/home/docker/cp-test_multinode-209859-m02_multinode-209859-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-209859 ssh -n multinode-209859-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-209859 ssh -n multinode-209859-m03 "sudo cat /home/docker/cp-test_multinode-209859-m02_multinode-209859-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-209859 cp testdata/cp-test.txt multinode-209859-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-209859 ssh -n multinode-209859-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-209859 cp multinode-209859-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile982170147/001/cp-test_multinode-209859-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-209859 ssh -n multinode-209859-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-209859 cp multinode-209859-m03:/home/docker/cp-test.txt multinode-209859:/home/docker/cp-test_multinode-209859-m03_multinode-209859.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-209859 ssh -n multinode-209859-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-209859 ssh -n multinode-209859 "sudo cat /home/docker/cp-test_multinode-209859-m03_multinode-209859.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-209859 cp multinode-209859-m03:/home/docker/cp-test.txt multinode-209859-m02:/home/docker/cp-test_multinode-209859-m03_multinode-209859-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-209859 ssh -n multinode-209859-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-209859 ssh -n multinode-209859-m02 "sudo cat /home/docker/cp-test_multinode-209859-m03_multinode-209859-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.28s)
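For reference, minikube cp accepts a host path or <node>:<path> on either side; the matrix above reduces to three copy shapes (placeholder profile name "demo", whose nodes are demo and demo-m02):

    minikube -p demo cp testdata/cp-test.txt demo:/home/docker/cp-test.txt              # host -> node
    minikube -p demo cp demo:/home/docker/cp-test.txt /tmp/cp-test.txt                  # node -> host
    minikube -p demo cp demo:/home/docker/cp-test.txt demo-m02:/home/docker/cp-test.txt # node -> node
    minikube -p demo ssh -n demo-m02 "sudo cat /home/docker/cp-test.txt"                # verify on the target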

                                                
                                    
TestMultiNode/serial/StopNode (2.27s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-209859 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-209859 node stop m03: (1.216642608s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-209859 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-209859 status: exit status 7 (530.977177ms)
-- stdout --
	multinode-209859
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-209859-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-209859-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-209859 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-209859 status --alsologtostderr: exit status 7 (517.626513ms)
-- stdout --
	multinode-209859
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-209859-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-209859-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0916 19:43:02.331385  842559 out.go:345] Setting OutFile to fd 1 ...
	I0916 19:43:02.331550  842559 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 19:43:02.331559  842559 out.go:358] Setting ErrFile to fd 2...
	I0916 19:43:02.331565  842559 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 19:43:02.331810  842559 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19649-716050/.minikube/bin
	I0916 19:43:02.332014  842559 out.go:352] Setting JSON to false
	I0916 19:43:02.332051  842559 mustload.go:65] Loading cluster: multinode-209859
	I0916 19:43:02.332126  842559 notify.go:220] Checking for updates...
	I0916 19:43:02.333020  842559 config.go:182] Loaded profile config "multinode-209859": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 19:43:02.333045  842559 status.go:255] checking status of multinode-209859 ...
	I0916 19:43:02.333581  842559 cli_runner.go:164] Run: docker container inspect multinode-209859 --format={{.State.Status}}
	I0916 19:43:02.350840  842559 status.go:330] multinode-209859 host status = "Running" (err=<nil>)
	I0916 19:43:02.350866  842559 host.go:66] Checking if "multinode-209859" exists ...
	I0916 19:43:02.351175  842559 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-209859
	I0916 19:43:02.372483  842559 host.go:66] Checking if "multinode-209859" exists ...
	I0916 19:43:02.372776  842559 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 19:43:02.372829  842559 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-209859
	I0916 19:43:02.390303  842559 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33672 SSHKeyPath:/home/jenkins/minikube-integration/19649-716050/.minikube/machines/multinode-209859/id_rsa Username:docker}
	I0916 19:43:02.490915  842559 ssh_runner.go:195] Run: systemctl --version
	I0916 19:43:02.495212  842559 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 19:43:02.507101  842559 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 19:43:02.580617  842559 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:61 SystemTime:2024-09-16 19:43:02.569620913 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0916 19:43:02.581210  842559 kubeconfig.go:125] found "multinode-209859" server: "https://192.168.67.2:8443"
	I0916 19:43:02.581249  842559 api_server.go:166] Checking apiserver status ...
	I0916 19:43:02.581293  842559 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 19:43:02.592866  842559 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1450/cgroup
	I0916 19:43:02.602656  842559 api_server.go:182] apiserver freezer: "8:freezer:/docker/3ad3654d95590370db5151bca8679fa40fc1c902b8ed400e49ad2fa5047a2ca4/kubepods/burstable/podc3698a99c0f2a877fcbdbd625f06c8ce/21347fb61670a73ddbbfafb5a7f21c40e01c5deaa3fe19a4860aefe2236b5f57"
	I0916 19:43:02.602733  842559 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/3ad3654d95590370db5151bca8679fa40fc1c902b8ed400e49ad2fa5047a2ca4/kubepods/burstable/podc3698a99c0f2a877fcbdbd625f06c8ce/21347fb61670a73ddbbfafb5a7f21c40e01c5deaa3fe19a4860aefe2236b5f57/freezer.state
	I0916 19:43:02.611900  842559 api_server.go:204] freezer state: "THAWED"
	I0916 19:43:02.611932  842559 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0916 19:43:02.619737  842559 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0916 19:43:02.619763  842559 status.go:422] multinode-209859 apiserver status = Running (err=<nil>)
	I0916 19:43:02.619774  842559 status.go:257] multinode-209859 status: &{Name:multinode-209859 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 19:43:02.619791  842559 status.go:255] checking status of multinode-209859-m02 ...
	I0916 19:43:02.620125  842559 cli_runner.go:164] Run: docker container inspect multinode-209859-m02 --format={{.State.Status}}
	I0916 19:43:02.636808  842559 status.go:330] multinode-209859-m02 host status = "Running" (err=<nil>)
	I0916 19:43:02.636835  842559 host.go:66] Checking if "multinode-209859-m02" exists ...
	I0916 19:43:02.637142  842559 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-209859-m02
	I0916 19:43:02.655455  842559 host.go:66] Checking if "multinode-209859-m02" exists ...
	I0916 19:43:02.655808  842559 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 19:43:02.655854  842559 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-209859-m02
	I0916 19:43:02.673041  842559 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33677 SSHKeyPath:/home/jenkins/minikube-integration/19649-716050/.minikube/machines/multinode-209859-m02/id_rsa Username:docker}
	I0916 19:43:02.768359  842559 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 19:43:02.779908  842559 status.go:257] multinode-209859-m02 status: &{Name:multinode-209859-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0916 19:43:02.779944  842559 status.go:255] checking status of multinode-209859-m03 ...
	I0916 19:43:02.780258  842559 cli_runner.go:164] Run: docker container inspect multinode-209859-m03 --format={{.State.Status}}
	I0916 19:43:02.796666  842559 status.go:330] multinode-209859-m03 host status = "Stopped" (err=<nil>)
	I0916 19:43:02.796689  842559 status.go:343] host is not running, skipping remaining checks
	I0916 19:43:02.796697  842559 status.go:257] multinode-209859-m03 status: &{Name:multinode-209859-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.27s)
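The Non-zero exit above is expected: minikube status encodes component health in its exit code, so it returns non-zero (7 in this run) once a host is stopped. Schematically, with a placeholder profile:

    minikube -p demo node stop m03
    minikube -p demo status
    echo "status exit code: $?"   # 7 here; 0 only when everything is running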

                                                
                                    
TestMultiNode/serial/StartAfterStop (9.7s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-209859 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-209859 node start m03 -v=7 --alsologtostderr: (8.912609818s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-209859 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.70s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (113.99s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-209859
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-209859
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-209859: (25.015558307s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-209859 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-209859 --wait=true -v=8 --alsologtostderr: (1m28.850974095s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-209859
--- PASS: TestMultiNode/serial/RestartKeepsNodes (113.99s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.48s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-209859 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-209859 node delete m03: (4.791654855s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-209859 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.48s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (24.04s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-209859 stop
E0916 19:45:31.996249  721428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/functional-720698/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-209859 stop: (23.856272375s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-209859 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-209859 status: exit status 7 (91.359821ms)
-- stdout --
	multinode-209859
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-209859-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-209859 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-209859 status --alsologtostderr: exit status 7 (90.637098ms)
-- stdout --
	multinode-209859
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-209859-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0916 19:45:35.954535  851027 out.go:345] Setting OutFile to fd 1 ...
	I0916 19:45:35.954873  851027 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 19:45:35.954888  851027 out.go:358] Setting ErrFile to fd 2...
	I0916 19:45:35.954895  851027 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 19:45:35.955148  851027 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19649-716050/.minikube/bin
	I0916 19:45:35.955411  851027 out.go:352] Setting JSON to false
	I0916 19:45:35.955453  851027 mustload.go:65] Loading cluster: multinode-209859
	I0916 19:45:35.955530  851027 notify.go:220] Checking for updates...
	I0916 19:45:35.956462  851027 config.go:182] Loaded profile config "multinode-209859": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 19:45:35.956486  851027 status.go:255] checking status of multinode-209859 ...
	I0916 19:45:35.957021  851027 cli_runner.go:164] Run: docker container inspect multinode-209859 --format={{.State.Status}}
	I0916 19:45:35.974553  851027 status.go:330] multinode-209859 host status = "Stopped" (err=<nil>)
	I0916 19:45:35.974578  851027 status.go:343] host is not running, skipping remaining checks
	I0916 19:45:35.974586  851027 status.go:257] multinode-209859 status: &{Name:multinode-209859 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 19:45:35.974622  851027 status.go:255] checking status of multinode-209859-m02 ...
	I0916 19:45:35.974950  851027 cli_runner.go:164] Run: docker container inspect multinode-209859-m02 --format={{.State.Status}}
	I0916 19:45:35.998702  851027 status.go:330] multinode-209859-m02 host status = "Stopped" (err=<nil>)
	I0916 19:45:35.998727  851027 status.go:343] host is not running, skipping remaining checks
	I0916 19:45:35.998735  851027 status.go:257] multinode-209859-m02 status: &{Name:multinode-209859-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.04s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (52.95s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-209859 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0916 19:46:18.433225  721428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/addons-350900/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-209859 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (52.277117358s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-209859 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (52.95s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (35.92s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-209859
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-209859-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-209859-m02 --driver=docker  --container-runtime=containerd: exit status 14 (92.697512ms)
-- stdout --
	* [multinode-209859-m02] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19649
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19649-716050/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19649-716050/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-209859-m02' is duplicated with machine name 'multinode-209859-m02' in profile 'multinode-209859'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-209859-m03 --driver=docker  --container-runtime=containerd
E0916 19:46:55.056372  721428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/functional-720698/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-209859-m03 --driver=docker  --container-runtime=containerd: (33.51109457s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-209859
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-209859: exit status 80 (312.737963ms)
-- stdout --
	* Adding node m03 to cluster multinode-209859 as [worker]
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-209859-m03 already exists in multinode-209859-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-209859-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-209859-m03: (1.941484876s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (35.92s)

                                                
                                    
TestPreload (116.13s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-348736 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-348736 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m18.286525632s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-348736 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-348736 image pull gcr.io/k8s-minikube/busybox: (1.930866472s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-348736
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-348736: (12.053528258s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-348736 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-348736 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (21.042520165s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-348736 image list
helpers_test.go:175: Cleaning up "test-preload-348736" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-348736
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-348736: (2.516419497s)
--- PASS: TestPreload (116.13s)

                                                
                                    
TestScheduledStopUnix (107.83s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-848478 --memory=2048 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-848478 --memory=2048 --driver=docker  --container-runtime=containerd: (31.289224992s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-848478 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-848478 -n scheduled-stop-848478
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-848478 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-848478 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-848478 -n scheduled-stop-848478
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-848478
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-848478 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0916 19:50:31.998441  721428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/functional-720698/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-848478
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-848478: exit status 7 (70.592436ms)
-- stdout --
	scheduled-stop-848478
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-848478 -n scheduled-stop-848478
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-848478 -n scheduled-stop-848478: exit status 7 (67.862047ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-848478" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-848478
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-848478: (4.947756851s)
--- PASS: TestScheduledStopUnix (107.83s)
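The scheduled-stop flags exercised above, in order (placeholder profile name; timings as in the test):

    minikube stop -p demo --schedule 5m            # arm a stop five minutes out
    minikube stop -p demo --cancel-scheduled       # disarm the pending stop
    minikube stop -p demo --schedule 15s           # re-arm; roughly 15s later the host stops
    minikube status -p demo --format '{{.Host}}'   # prints Stopped (exit status 7, which may be ok)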

                                                
                                    
TestInsufficientStorage (10.67s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-304078 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-304078 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (8.115759024s)
-- stdout --
	{"specversion":"1.0","id":"db7f7869-fdde-4ce1-82db-0001ce48989e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-304078] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"1c732852-a5f3-4071-a1bc-a2459af20741","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19649"}}
	{"specversion":"1.0","id":"2becd69d-4103-4826-9120-809189880977","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"42914420-451f-49e6-8d20-df0ad478657a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19649-716050/kubeconfig"}}
	{"specversion":"1.0","id":"6b404f6d-3930-4c90-8efd-efba9e401cb4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19649-716050/.minikube"}}
	{"specversion":"1.0","id":"fdb151a6-0dce-44b4-a833-06f534a74c62","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"94f2cfc5-0608-4bfc-8280-5cbe4ef0760b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"21b3eae5-bf83-44ba-9b94-ed4c76c0fd43","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"5b5976f2-88eb-47d6-ab4f-6b59ffdf61ea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"7a1ad0bb-43be-4c74-adc8-3e904940d821","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"7327b86c-ce7a-4642-8ed1-3be3e41e0473","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"f4c0a16e-4bee-4eb4-8cf6-eb7a33a38eac","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-304078\" primary control-plane node in \"insufficient-storage-304078\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"12006ed3-c7c8-43db-acde-0d21d747cdf4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.45-1726481311-19649 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"b13edc1c-1b9f-4a1b-b53d-8fd4de3ba987","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"f7576c84-8868-4954-ab91-7ff3676582d5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-304078 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-304078 --output=json --layout=cluster: exit status 7 (294.76422ms)
-- stdout --
	{"Name":"insufficient-storage-304078","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-304078","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E0916 19:51:01.178605  869346 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-304078" does not appear in /home/jenkins/minikube-integration/19649-716050/kubeconfig
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-304078 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-304078 --output=json --layout=cluster: exit status 7 (326.324146ms)
-- stdout --
	{"Name":"insufficient-storage-304078","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-304078","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0916 19:51:01.510172  869406 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-304078" does not appear in /home/jenkins/minikube-integration/19649-716050/kubeconfig
	E0916 19:51:01.520879  869406 status.go:560] unable to read event log: stat: stat /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/insufficient-storage-304078/events.json: no such file or directory

** /stderr **
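Note: both status calls exit 7 but still print the cluster-layout JSON on stdout, so the failure mode is scriptable. A small sketch, assuming jq is available:

	# exit 7 means the host is stopped or nonexistent; keep the pipeline going anyway
	out/minikube-linux-arm64 status -p insufficient-storage-304078 --output=json --layout=cluster > status.json || true
	jq -r '.StatusName' status.json                                # -> InsufficientStorage
	jq -r '.Nodes[].Components.kubelet.StatusName' status.json     # -> Stopped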
helpers_test.go:175: Cleaning up "insufficient-storage-304078" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-304078
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-304078: (1.927392937s)
--- PASS: TestInsufficientStorage (10.67s)

TestRunningBinaryUpgrade (78.93s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.463296458 start -p running-upgrade-787279 --memory=2200 --vm-driver=docker  --container-runtime=containerd
E0916 19:56:18.433812  721428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/addons-350900/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.463296458 start -p running-upgrade-787279 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (43.365774423s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-787279 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-787279 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (32.152642874s)
helpers_test.go:175: Cleaning up "running-upgrade-787279" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-787279
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-787279: (2.524603533s)
--- PASS: TestRunningBinaryUpgrade (78.93s)

TestKubernetesUpgrade (353.33s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-729623 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-729623 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m1.452887184s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-729623
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-729623: (1.33005907s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-729623 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-729623 status --format={{.Host}}: exit status 7 (90.02588ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-729623 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-729623 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m38.65893443s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-729623 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-729623 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-729623 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd: exit status 106 (99.006756ms)

-- stdout --
	* [kubernetes-upgrade-729623] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19649
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19649-716050/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19649-716050/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-729623
	    minikube start -p kubernetes-upgrade-729623 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-7296232 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-729623 --kubernetes-version=v1.31.1
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-729623 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-729623 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (9.110031104s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-729623" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-729623
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-729623: (2.477100354s)
--- PASS: TestKubernetesUpgrade (353.33s)
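Note: the upgrade path this test exercises can be replayed by hand; the commands below are the test's own invocations, trimmed to essentials:

	minikube start -p kubernetes-upgrade-729623 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker --container-runtime=containerd
	minikube stop -p kubernetes-upgrade-729623
	# in-place upgrade: same profile, newer --kubernetes-version
	minikube start -p kubernetes-upgrade-729623 --memory=2200 --kubernetes-version=v1.31.1 --driver=docker --container-runtime=containerd
	# a downgrade on the same profile is refused with exit 106 (K8S_DOWNGRADE_UNSUPPORTED)
	minikube start -p kubernetes-upgrade-729623 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker --container-runtime=containerd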

TestMissingContainerUpgrade (177.39s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.1909049327 start -p missing-upgrade-283426 --memory=2200 --driver=docker  --container-runtime=containerd
E0916 19:51:18.434155  721428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/addons-350900/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.1909049327 start -p missing-upgrade-283426 --memory=2200 --driver=docker  --container-runtime=containerd: (1m31.211865124s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-283426
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-283426: (10.313709523s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-283426
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-283426 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-283426 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m12.696114184s)
helpers_test.go:175: Cleaning up "missing-upgrade-283426" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-283426
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-283426: (2.421949764s)
--- PASS: TestMissingContainerUpgrade (177.39s)

TestPause/serial/Start (98.46s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-607765 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-607765 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (1m38.460672027s)
--- PASS: TestPause/serial/Start (98.46s)

TestPause/serial/SecondStartNoReconfiguration (6.26s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-607765 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-607765 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (6.241133263s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (6.26s)

TestPause/serial/Pause (0.76s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-607765 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.76s)

TestPause/serial/VerifyStatus (0.31s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-607765 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-607765 --output=json --layout=cluster: exit status 2 (310.974419ms)

-- stdout --
	{"Name":"pause-607765","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-607765","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.31s)
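Note: in the cluster-layout JSON, 418 is minikube's "Paused" status code, alongside the 200 OK, 405 Stopped, and 507 InsufficientStorage codes seen elsewhere in this run; `status` itself exits 2 while the cluster is paused. The same check by hand:

	minikube pause -p pause-607765
	# status exits 2 while paused; the JSON still reports StatusName "Paused" (code 418)
	minikube status -p pause-607765 --output=json --layout=cluster || echo "status exit: $?"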

TestPause/serial/Unpause (0.66s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-607765 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.66s)

TestPause/serial/PauseAgain (0.84s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-607765 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.84s)

TestPause/serial/DeletePaused (2.49s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-607765 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-607765 --alsologtostderr -v=5: (2.4849183s)
--- PASS: TestPause/serial/DeletePaused (2.49s)

TestPause/serial/VerifyDeletedResources (0.14s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-607765
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-607765: exit status 1 (17.763665ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-607765: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.14s)
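Note: the cleanup verification boils down to asserting that no container, volume, or network for the profile survived the delete. A hedged equivalent by hand (the name filter is illustrative):

	docker ps -a --filter name=pause-607765      # expect no matching rows
	docker volume inspect pause-607765 || echo "volume gone, as expected"
	docker network ls | grep pause-607765 || echo "network gone, as expected"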

TestStoppedBinaryUpgrade/Setup (0.71s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.71s)

TestStoppedBinaryUpgrade/Upgrade (119.92s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.384759524 start -p stopped-upgrade-091082 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.384759524 start -p stopped-upgrade-091082 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (40.849952947s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.384759524 -p stopped-upgrade-091082 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.384759524 -p stopped-upgrade-091082 stop: (19.97356662s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-091082 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0916 19:55:31.996684  721428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/functional-720698/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-091082 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (59.093021132s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (119.92s)
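Note: this is the stopped-cluster variant of TestRunningBinaryUpgrade above: provision and stop with the old release, then let the new binary adopt the profile. A sketch of the flow (the old-binary path is illustrative; the /tmp names in the log are temp files created by the test):

	/tmp/minikube-v1.26.0 start -p stopped-upgrade-091082 --memory=2200 --vm-driver=docker --container-runtime=containerd
	/tmp/minikube-v1.26.0 -p stopped-upgrade-091082 stop
	out/minikube-linux-arm64 start -p stopped-upgrade-091082 --memory=2200 --driver=docker --container-runtime=containerd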

TestStoppedBinaryUpgrade/MinikubeLogs (1.17s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-091082
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-091082: (1.171238271s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.17s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-846883 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-846883 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd: exit status 14 (79.898981ms)

-- stdout --
	* [NoKubernetes-846883] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19649
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19649-716050/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19649-716050/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
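Note: as the MK_USAGE message says, --no-kubernetes and --kubernetes-version are mutually exclusive. The valid forms, mirroring commands used later in this group:

	# clear a globally configured version, if one is set
	minikube config unset kubernetes-version
	# start the node with no Kubernetes components at all
	minikube start -p NoKubernetes-846883 --no-kubernetes --driver=docker --container-runtime=containerd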

TestNoKubernetes/serial/StartWithK8s (42.37s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-846883 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-846883 --driver=docker  --container-runtime=containerd: (41.878942867s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-846883 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (42.37s)

TestNoKubernetes/serial/StartWithStopK8s (19.25s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-846883 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-846883 --no-kubernetes --driver=docker  --container-runtime=containerd: (16.649614685s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-846883 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-846883 status -o json: exit status 2 (348.741982ms)

-- stdout --
	{"Name":"NoKubernetes-846883","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-846883
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-846883: (2.250223654s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (19.25s)

TestNetworkPlugins/group/false (3.6s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-190397 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-190397 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (188.658627ms)

-- stdout --
	* [false-190397] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19649
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19649-716050/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19649-716050/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I0916 19:58:50.345030  908793 out.go:345] Setting OutFile to fd 1 ...
	I0916 19:58:50.345216  908793 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 19:58:50.345229  908793 out.go:358] Setting ErrFile to fd 2...
	I0916 19:58:50.345235  908793 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 19:58:50.345500  908793 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19649-716050/.minikube/bin
	I0916 19:58:50.345948  908793 out.go:352] Setting JSON to false
	I0916 19:58:50.347019  908793 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":13244,"bootTime":1726503487,"procs":211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0916 19:58:50.347105  908793 start.go:139] virtualization:  
	I0916 19:58:50.350486  908793 out.go:177] * [false-190397] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0916 19:58:50.353874  908793 out.go:177]   - MINIKUBE_LOCATION=19649
	I0916 19:58:50.353948  908793 notify.go:220] Checking for updates...
	I0916 19:58:50.359386  908793 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 19:58:50.362051  908793 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19649-716050/kubeconfig
	I0916 19:58:50.365005  908793 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19649-716050/.minikube
	I0916 19:58:50.367837  908793 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0916 19:58:50.370510  908793 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 19:58:50.373645  908793 config.go:182] Loaded profile config "NoKubernetes-846883": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v0.0.0
	I0916 19:58:50.373753  908793 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 19:58:50.394317  908793 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0916 19:58:50.394452  908793 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 19:58:50.462083  908793 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-16 19:58:50.445912831 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0916 19:58:50.462215  908793 docker.go:318] overlay module found
	I0916 19:58:50.466725  908793 out.go:177] * Using the docker driver based on user configuration
	I0916 19:58:50.470676  908793 start.go:297] selected driver: docker
	I0916 19:58:50.470698  908793 start.go:901] validating driver "docker" against <nil>
	I0916 19:58:50.470714  908793 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 19:58:50.474556  908793 out.go:201] 
	W0916 19:58:50.477816  908793 out.go:270] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0916 19:58:50.481024  908793 out.go:201] 

** /stderr **
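Note: this failure is the expected outcome: with the containerd runtime, minikube rejects --cni=false because the runtime needs a CNI plugin for pod networking. A working variant would name a concrete CNI instead; bridge is shown as an assumed example (minikube's --cni flag also accepts values such as calico, cilium, flannel, and kindnet):

	minikube start -p false-190397 --memory=2048 --cni=bridge --driver=docker --container-runtime=containerd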
net_test.go:88: 
----------------------- debugLogs start: false-190397 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-190397

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-190397

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-190397

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-190397

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-190397

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-190397

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-190397

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-190397

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-190397

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-190397

>>> host: /etc/nsswitch.conf:
* Profile "false-190397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-190397"

>>> host: /etc/hosts:
* Profile "false-190397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-190397"

>>> host: /etc/resolv.conf:
* Profile "false-190397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-190397"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-190397

>>> host: crictl pods:
* Profile "false-190397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-190397"

>>> host: crictl containers:
* Profile "false-190397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-190397"

>>> k8s: describe netcat deployment:
error: context "false-190397" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-190397" does not exist

>>> k8s: netcat logs:
error: context "false-190397" does not exist

>>> k8s: describe coredns deployment:
error: context "false-190397" does not exist

>>> k8s: describe coredns pods:
error: context "false-190397" does not exist

>>> k8s: coredns logs:
error: context "false-190397" does not exist

>>> k8s: describe api server pod(s):
error: context "false-190397" does not exist

>>> k8s: api server logs:
error: context "false-190397" does not exist

>>> host: /etc/cni:
* Profile "false-190397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-190397"

>>> host: ip a s:
* Profile "false-190397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-190397"

>>> host: ip r s:
* Profile "false-190397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-190397"

>>> host: iptables-save:
* Profile "false-190397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-190397"

>>> host: iptables table nat:
* Profile "false-190397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-190397"

>>> k8s: describe kube-proxy daemon set:
error: context "false-190397" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-190397" does not exist

>>> k8s: kube-proxy logs:
error: context "false-190397" does not exist

>>> host: kubelet daemon status:
* Profile "false-190397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-190397"

>>> host: kubelet daemon config:
* Profile "false-190397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-190397"

>>> k8s: kubelet logs:
* Profile "false-190397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-190397"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-190397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-190397"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-190397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-190397"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19649-716050/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 16 Sep 2024 19:58:38 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: NoKubernetes-846883
contexts:
- context:
    cluster: NoKubernetes-846883
    extensions:
    - extension:
        last-update: Mon, 16 Sep 2024 19:58:38 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: NoKubernetes-846883
  name: NoKubernetes-846883
current-context: ""
kind: Config
preferences: {}
users:
- name: NoKubernetes-846883
  user:
    client-certificate: /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/NoKubernetes-846883/client.crt
    client-key: /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/NoKubernetes-846883/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-190397

>>> host: docker daemon status:
* Profile "false-190397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-190397"

>>> host: docker daemon config:
* Profile "false-190397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-190397"

>>> host: /etc/docker/daemon.json:
* Profile "false-190397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-190397"

>>> host: docker system info:
* Profile "false-190397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-190397"

>>> host: cri-docker daemon status:
* Profile "false-190397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-190397"

>>> host: cri-docker daemon config:
* Profile "false-190397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-190397"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-190397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-190397"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-190397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-190397"

>>> host: cri-dockerd version:
* Profile "false-190397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-190397"

>>> host: containerd daemon status:
* Profile "false-190397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-190397"

>>> host: containerd daemon config:
* Profile "false-190397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-190397"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-190397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-190397"

>>> host: /etc/containerd/config.toml:
* Profile "false-190397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-190397"

>>> host: containerd config dump:
* Profile "false-190397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-190397"

>>> host: crio daemon status:
* Profile "false-190397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-190397"

>>> host: crio daemon config:
* Profile "false-190397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-190397"

>>> host: /etc/crio:
* Profile "false-190397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-190397"

>>> host: crio config:
* Profile "false-190397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-190397"

----------------------- debugLogs end: false-190397 [took: 3.258452788s] --------------------------------
helpers_test.go:175: Cleaning up "false-190397" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-190397
--- PASS: TestNetworkPlugins/group/false (3.60s)

TestNoKubernetes/serial/Start (10.07s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-846883 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-846883 --no-kubernetes --driver=docker  --container-runtime=containerd: (10.069704997s)
--- PASS: TestNoKubernetes/serial/Start (10.07s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.34s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-846883 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-846883 "sudo systemctl is-active --quiet service kubelet": exit status 1 (339.190195ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.34s)
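Note: the probe leans on systemctl semantics: `is-active` exits 0 for an active unit and 3 for an inactive one, and `minikube ssh` propagates that as its own exit status. The same check by hand:

	minikube ssh -p NoKubernetes-846883 "sudo systemctl is-active kubelet" || echo "kubelet not running (ssh exit $?)"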

TestNoKubernetes/serial/ProfileList (1.07s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.07s)

TestNoKubernetes/serial/Stop (1.26s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-846883
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-846883: (1.255333787s)
--- PASS: TestNoKubernetes/serial/Stop (1.26s)

TestNoKubernetes/serial/StartNoArgs (7.01s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-846883 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-846883 --driver=docker  --container-runtime=containerd: (7.00834757s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.01s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.35s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-846883 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-846883 "sudo systemctl is-active --quiet service kubelet": exit status 1 (347.167534ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.35s)

TestStartStop/group/old-k8s-version/serial/FirstStart (144.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-908284 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
E0916 20:01:18.433351  721428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/addons-350900/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-908284 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: (2m24.11155418s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (144.11s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (90.82s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-762419 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-762419 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (1m30.817976344s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (90.82s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.64s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-908284 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [eebcbc94-a425-4967-9715-391b073b62c5] Pending
helpers_test.go:344: "busybox" [eebcbc94-a425-4967-9715-391b073b62c5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [eebcbc94-a425-4967-9715-391b073b62c5] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.00484812s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-908284 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.64s)
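The recurring "waiting 8m0s for pods matching ..." lines come from a polling helper. Below is a minimal client-go sketch of the same idea, polling a label selector until a pod is Running and Ready; the kubeconfig path, namespace, selector, and intervals are assumptions for illustration, not minikube's actual helper code.

// Minimal sketch of "wait for pods matching a label selector"; assumes a
// kubeconfig at the default path. Not the helpers_test.go implementation.
package main

import (
    "context"
    "fmt"
    "os"
    "path/filepath"
    "time"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/wait"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func podReady(p *corev1.Pod) bool {
    for _, c := range p.Status.Conditions {
        if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
            return true
        }
    }
    return false
}

func main() {
    home, _ := os.UserHomeDir()
    cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(home, ".kube", "config"))
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    // Poll every 2s, up to 8m, mirroring the "waiting 8m0s for pods" lines.
    err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 8*time.Minute, true,
        func(ctx context.Context) (bool, error) {
            pods, err := cs.CoreV1().Pods("default").List(ctx, metav1.ListOptions{
                LabelSelector: "integration-test=busybox",
            })
            if err != nil {
                return false, nil // treat API hiccups as transient; keep polling
            }
            for i := range pods.Items {
                if pods.Items[i].Status.Phase == corev1.PodRunning && podReady(&pods.Items[i]) {
                    return true, nil
                }
            }
            return false, nil
        })
    fmt.Println("wait result:", err) // nil means a matching pod became healthy
}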

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.61s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-908284 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-908284 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.439313633s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-908284 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.61s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (12.19s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-908284 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-908284 --alsologtostderr -v=3: (12.186155595s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.19s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-908284 -n old-k8s-version-908284
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-908284 -n old-k8s-version-908284: exit status 7 (70.45442ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-908284 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)
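The --format={{.Host}} flag is a Go text/template rendered against a status struct, which is why the stdout block above contains only the word "Stopped". A toy sketch of that mechanism, using a simplified stand-in struct rather than minikube's real Status type:

// Toy sketch of how a --format={{.Host}} style flag works: render a Go
// text/template against a status struct. Status here is a simplified
// stand-in for illustration, not minikube's actual type.
package main

import (
    "os"
    "text/template"
)

type Status struct {
    Host      string
    Kubelet   string
    APIServer string
}

func main() {
    st := Status{Host: "Stopped", Kubelet: "Stopped", APIServer: "Stopped"}
    tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
    if err := tmpl.Execute(os.Stdout, st); err != nil {
        panic(err)
    }
    // Prints "Stopped", matching the -- stdout -- block above.
}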

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.44s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-762419 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [3d30e1ec-1c1f-4b50-b34b-f3592368b33f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [3d30e1ec-1c1f-4b50-b34b-f3592368b33f] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.0034042s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-762419 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.44s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.18s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-762419 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-762419 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.058526695s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-762419 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.18s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-762419 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-762419 --alsologtostderr -v=3: (12.07843161s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.08s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-762419 -n default-k8s-diff-port-762419
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-762419 -n default-k8s-diff-port-762419: exit status 7 (67.230331ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-762419 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (268.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-762419 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
E0916 20:05:31.996168  721428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/functional-720698/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:06:18.433290  721428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/addons-350900/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-762419 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (4m27.732941749s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-762419 -n default-k8s-diff-port-762419
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (268.12s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-md6s4" [254edbf8-7f55-44cd-a52b-823e9ea77d7f] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00351725s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-md6s4" [254edbf8-7f55-44cd-a52b-823e9ea77d7f] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.006150635s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-762419 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.11s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-762419 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.55s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-762419 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-762419 -n default-k8s-diff-port-762419
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-762419 -n default-k8s-diff-port-762419: exit status 2 (418.022839ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-762419 -n default-k8s-diff-port-762419
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-762419 -n default-k8s-diff-port-762419: exit status 2 (412.368255ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-762419 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-762419 -n default-k8s-diff-port-762419
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-762419 -n default-k8s-diff-port-762419
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.55s)
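Every Pause block follows the same shape: after "minikube pause", the status queries print Paused/Stopped and exit with status 2, which the test tolerates ("status error: exit status 2 (may be ok)"). A small Go sketch of reading both the output and the exit code from such a command; the exit-code meanings in the comments are taken from this log, not an exhaustive list:

// Sketch: run "minikube status --format={{.APIServer}}" and capture both
// stdout and the exit code. In the log above, exit status 2 with output
// "Paused"/"Stopped" is expected right after "minikube pause"; exit 7
// appears when the whole host is stopped.
package main

import (
    "errors"
    "fmt"
    "os/exec"
    "strings"
)

func main() {
    out, err := exec.Command("out/minikube-linux-arm64", "status",
        "--format={{.APIServer}}", "-p", "default-k8s-diff-port-762419").Output()
    code := 0
    if err != nil {
        var exitErr *exec.ExitError
        if !errors.As(err, &exitErr) {
            panic(err) // binary missing or not executable
        }
        code = exitErr.ExitCode()
    }
    status := strings.TrimSpace(string(out))
    // Mirror the test's tolerance: a non-zero status exit "may be ok".
    fmt.Printf("status=%q exit=%d\n", status, code)
}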

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (93.11s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-931636 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-931636 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (1m33.106402524s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (93.11s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-tvt7k" [ee6b42aa-4074-4f2d-b1a6-40ee37c2da19] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003943775s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-tvt7k" [ee6b42aa-4074-4f2d-b1a6-40ee37c2da19] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00513475s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-908284 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.31s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-908284 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.31s)
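VerifyKubernetesImages decodes "image list --format=json" and reports anything outside the expected Kubernetes image set, producing the "Found non-minikube image" lines. A sketch of that filtering, assuming a simplified JSON schema with a repoTags field (the real schema and allowlist may differ):

// Sketch of the VerifyKubernetesImages idea: decode an image list and report
// tags outside an expected set. The JSON schema here (repoTags) and the
// registry-prefix check are assumptions for illustration only.
package main

import (
    "encoding/json"
    "fmt"
    "strings"
)

type image struct {
    RepoTags []string `json:"repoTags"`
}

func main() {
    raw := []byte(`[{"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"]},
                    {"repoTags":["kindest/kindnetd:v20240813-c6f155d6"]}]`)
    var images []image
    if err := json.Unmarshal(raw, &images); err != nil {
        panic(err)
    }
    for _, img := range images {
        for _, tag := range img.RepoTags {
            // Anything outside the standard Kubernetes registry is reported,
            // like the "Found non-minikube image" lines above.
            if !strings.HasPrefix(tag, "registry.k8s.io/") {
                fmt.Println("Found non-minikube image:", tag)
            }
        }
    }
}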

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (4.45s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-908284 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-908284 -n old-k8s-version-908284
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-908284 -n old-k8s-version-908284: exit status 2 (512.346855ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-908284 -n old-k8s-version-908284
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-908284 -n old-k8s-version-908284: exit status 2 (620.265818ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-908284 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p old-k8s-version-908284 --alsologtostderr -v=1: (1.159297011s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-908284 -n old-k8s-version-908284
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-908284 -n old-k8s-version-908284
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (4.45s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (59.37s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-658603 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
E0916 20:10:31.996238  721428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/functional-720698/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-658603 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (59.368677708s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (59.37s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (11.36s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-931636 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [9fc98b59-af04-40c0-9587-bfffedc3f2de] Pending
helpers_test.go:344: "busybox" [9fc98b59-af04-40c0-9587-bfffedc3f2de] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [9fc98b59-af04-40c0-9587-bfffedc3f2de] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 11.004030351s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-931636 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (11.36s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (10.35s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-658603 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [7eb84106-bcda-4b13-ba12-1a452949660e] Pending
helpers_test.go:344: "busybox" [7eb84106-bcda-4b13-ba12-1a452949660e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [7eb84106-bcda-4b13-ba12-1a452949660e] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.004672818s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-658603 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.35s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.22s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-931636 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-931636 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.105824319s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-931636 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.22s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (12.09s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-931636 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-931636 --alsologtostderr -v=3: (12.085058619s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.09s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.1s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-658603 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-658603 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.004232876s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-658603 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.10s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (12.1s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-658603 --alsologtostderr -v=3
E0916 20:11:18.434212  721428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/addons-350900/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-658603 --alsologtostderr -v=3: (12.097555486s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.10s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.17s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-931636 -n embed-certs-931636
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-931636 -n embed-certs-931636: exit status 7 (64.990356ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-931636 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.17s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (267.65s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-931636 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-931636 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (4m27.169481237s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-931636 -n embed-certs-931636
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (267.65s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-658603 -n no-preload-658603
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-658603 -n no-preload-658603: exit status 7 (95.050955ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-658603 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (272.48s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-658603 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
E0916 20:13:01.430302  721428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/old-k8s-version-908284/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:13:01.436684  721428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/old-k8s-version-908284/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:13:01.448200  721428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/old-k8s-version-908284/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:13:01.469608  721428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/old-k8s-version-908284/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:13:01.511580  721428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/old-k8s-version-908284/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:13:01.593434  721428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/old-k8s-version-908284/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:13:01.755009  721428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/old-k8s-version-908284/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:13:02.077338  721428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/old-k8s-version-908284/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:13:02.719149  721428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/old-k8s-version-908284/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:13:04.000838  721428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/old-k8s-version-908284/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:13:06.562432  721428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/old-k8s-version-908284/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:13:11.683979  721428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/old-k8s-version-908284/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:13:21.925908  721428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/old-k8s-version-908284/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:13:42.407910  721428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/old-k8s-version-908284/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:14:13.253477  721428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/default-k8s-diff-port-762419/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:14:13.259907  721428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/default-k8s-diff-port-762419/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:14:13.271361  721428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/default-k8s-diff-port-762419/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:14:13.292763  721428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/default-k8s-diff-port-762419/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:14:13.334179  721428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/default-k8s-diff-port-762419/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:14:13.415633  721428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/default-k8s-diff-port-762419/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:14:13.577107  721428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/default-k8s-diff-port-762419/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:14:13.898810  721428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/default-k8s-diff-port-762419/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:14:14.540144  721428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/default-k8s-diff-port-762419/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:14:15.821509  721428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/default-k8s-diff-port-762419/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:14:18.383663  721428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/default-k8s-diff-port-762419/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:14:23.369776  721428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/old-k8s-version-908284/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:14:23.505376  721428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/default-k8s-diff-port-762419/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:14:33.746767  721428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/default-k8s-diff-port-762419/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:14:54.228978  721428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/default-k8s-diff-port-762419/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:15:31.996219  721428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/functional-720698/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:15:35.190593  721428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/default-k8s-diff-port-762419/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:15:45.291481  721428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/old-k8s-version-908284/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-658603 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (4m31.962595655s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-658603 -n no-preload-658603
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (272.48s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-mp7h4" [2649a427-8ba4-41a0-9d04-1d73fae6db31] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003932887s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.12s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-mp7h4" [2649a427-8ba4-41a0-9d04-1d73fae6db31] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004737202s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-931636 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.12s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-931636 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.27s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (3.31s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-931636 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p embed-certs-931636 --alsologtostderr -v=1: (1.004655931s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-931636 -n embed-certs-931636
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-931636 -n embed-certs-931636: exit status 2 (347.994475ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-931636 -n embed-certs-931636
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-931636 -n embed-certs-931636: exit status 2 (339.766541ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-931636 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-931636 -n embed-certs-931636
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-931636 -n embed-certs-931636
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.31s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-m4rfw" [52d753bc-0e87-480d-b954-fe8f18041923] Running
E0916 20:16:01.505768  721428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/addons-350900/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004112402s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (41.15s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-116820 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-116820 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (41.148512883s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (41.15s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.12s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-m4rfw" [52d753bc-0e87-480d-b954-fe8f18041923] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003953095s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-658603 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.12s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-658603 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.28s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (4.59s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-658603 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p no-preload-658603 --alsologtostderr -v=1: (1.603951081s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-658603 -n no-preload-658603
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-658603 -n no-preload-658603: exit status 2 (480.695969ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-658603 -n no-preload-658603
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-658603 -n no-preload-658603: exit status 2 (443.966209ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-658603 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-658603 -n no-preload-658603
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-658603 -n no-preload-658603
--- PASS: TestStartStop/group/no-preload/serial/Pause (4.59s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (95.27s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-190397 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-190397 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (1m35.272769587s)
--- PASS: TestNetworkPlugins/group/auto/Start (95.27s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.44s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-116820 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-116820 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (2.442747759s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.44s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.34s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-116820 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-116820 --alsologtostderr -v=3: (1.33505537s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.34s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-116820 -n newest-cni-116820
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-116820 -n newest-cni-116820: exit status 7 (92.176381ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-116820 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.23s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (20.14s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-116820 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
E0916 20:16:57.111894  721428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/default-k8s-diff-port-762419/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-116820 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (19.653174502s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-116820 -n newest-cni-116820
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (20.14s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-116820 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)

TestStartStop/group/newest-cni/serial/Pause (2.96s)
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-116820 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-116820 -n newest-cni-116820
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-116820 -n newest-cni-116820: exit status 2 (315.256278ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-116820 -n newest-cni-116820
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-116820 -n newest-cni-116820: exit status 2 (325.564743ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-116820 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-116820 -n newest-cni-116820
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-116820 -n newest-cni-116820
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.96s)
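The pause round trip above boils down to four commands (taken verbatim from the log); while paused, {{.APIServer}} reports "Paused", {{.Kubelet}} reports "Stopped", and both status calls exit 2, which the test accepts:

  out/minikube-linux-arm64 pause -p newest-cni-116820 --alsologtostderr -v=1
  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-116820 -n newest-cni-116820 || true
  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-116820 -n newest-cni-116820 || true
  out/minikube-linux-arm64 unpause -p newest-cni-116820 --alsologtostderr -v=1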

TestNetworkPlugins/group/kindnet/Start (88.68s)
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-190397 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-190397 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (1m28.678534442s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (88.68s)

TestNetworkPlugins/group/auto/KubeletFlags (0.31s)
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-190397 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.31s)

TestNetworkPlugins/group/auto/NetCatPod (9.27s)
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-190397 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-fpsgs" [2fcf0749-311f-47b8-a5fa-b63ef7cf5018] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-fpsgs" [2fcf0749-311f-47b8-a5fa-b63ef7cf5018] Running
E0916 20:18:01.430093  721428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/old-k8s-version-908284/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.003798298s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.27s)

TestNetworkPlugins/group/auto/DNS (0.2s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-190397 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.20s)

TestNetworkPlugins/group/auto/Localhost (0.17s)
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-190397 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.17s)

TestNetworkPlugins/group/auto/HairPin (0.15s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-190397 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)
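Every CNI group that follows (kindnet, calico, custom-flannel, enable-default-cni, flannel, bridge) repeats the probe sequence just shown for "auto". A condensed sketch of that sequence, with the kube context parameterized (the CTX variable is illustrative, not from the log):

  CTX=auto-190397   # substitute the profile under test
  # kubelet flag check via ssh into the node
  out/minikube-linux-arm64 ssh -p "$CTX" "pgrep -a kubelet"
  # deploy the netcat pod used as the probe target
  kubectl --context "$CTX" replace --force -f testdata/netcat-deployment.yaml
  # in-cluster DNS resolution
  kubectl --context "$CTX" exec deployment/netcat -- nslookup kubernetes.default
  # localhost reachability from inside the pod
  kubectl --context "$CTX" exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
  # hairpin: the pod reaching itself via its own service name
  kubectl --context "$CTX" exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"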

TestNetworkPlugins/group/calico/Start (64.61s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-190397 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
E0916 20:18:29.132981  721428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/old-k8s-version-908284/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-190397 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (1m4.606837501s)
--- PASS: TestNetworkPlugins/group/calico/Start (64.61s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-drd8z" [1dbf8139-fdf4-48eb-8774-df3f34228a5b] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003676734s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.42s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-190397 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.42s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.38s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-190397 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-hjn29" [fa7ef70a-698b-4d66-bf04-d95ed5d37bb9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-hjn29" [fa7ef70a-698b-4d66-bf04-d95ed5d37bb9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.004280187s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.38s)

TestNetworkPlugins/group/kindnet/DNS (0.17s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-190397 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.17s)

TestNetworkPlugins/group/kindnet/Localhost (0.21s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-190397 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.21s)

TestNetworkPlugins/group/kindnet/HairPin (0.21s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-190397 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.21s)

TestNetworkPlugins/group/custom-flannel/Start (55.33s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-190397 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-190397 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (55.332267592s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (55.33s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-n6wqq" [2276a87b-02c0-4a22-be2a-883964c316c9] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005603585s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.35s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-190397 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.35s)

TestNetworkPlugins/group/calico/NetCatPod (10.36s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-190397 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-mw774" [b33a3113-7699-4d81-9dff-491b1bf5a948] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0916 20:19:40.953977  721428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/default-k8s-diff-port-762419/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-mw774" [b33a3113-7699-4d81-9dff-491b1bf5a948] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.004326105s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.36s)

TestNetworkPlugins/group/calico/DNS (0.23s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-190397 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.23s)

TestNetworkPlugins/group/calico/Localhost (0.18s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-190397 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.18s)

TestNetworkPlugins/group/calico/HairPin (0.18s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-190397 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.18s)

TestNetworkPlugins/group/enable-default-cni/Start (48.39s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-190397 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
E0916 20:20:15.060321  721428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/functional-720698/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-190397 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (48.388539507s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (48.39s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.28s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-190397 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.28s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (9.27s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-190397 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-mn22m" [3fe67416-24f5-4539-b369-298410c53129] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-mn22m" [3fe67416-24f5-4539-b369-298410c53129] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.004854153s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.27s)

TestNetworkPlugins/group/custom-flannel/DNS (0.22s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-190397 exec deployment/netcat -- nslookup kubernetes.default
E0916 20:20:31.995818  721428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/functional-720698/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.22s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.21s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-190397 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.21s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.19s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-190397 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.19s)

TestNetworkPlugins/group/flannel/Start (51s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-190397 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-190397 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (51.002070405s)
--- PASS: TestNetworkPlugins/group/flannel/Start (51.00s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.32s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-190397 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.32s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.46s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-190397 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-vrrzt" [8ce0819b-4787-4bd4-9889-2c44bf5b3c47] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0916 20:21:05.319889  721428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/no-preload-658603/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:21:05.326193  721428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/no-preload-658603/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:21:05.337620  721428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/no-preload-658603/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:21:05.358963  721428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/no-preload-658603/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:21:05.400321  721428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/no-preload-658603/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:21:05.482021  721428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/no-preload-658603/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:21:05.643983  721428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/no-preload-658603/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:21:05.965587  721428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/no-preload-658603/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:21:06.607352  721428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/no-preload-658603/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-vrrzt" [8ce0819b-4787-4bd4-9889-2c44bf5b3c47] Running
E0916 20:21:07.889332  721428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/no-preload-658603/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:21:10.450784  721428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/no-preload-658603/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.003692699s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.46s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.21s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-190397 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.21s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.21s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-190397 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.21s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-190397 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)

TestNetworkPlugins/group/bridge/Start (73.39s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-190397 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
E0916 20:21:46.297001  721428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/no-preload-658603/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-190397 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (1m13.390647655s)
--- PASS: TestNetworkPlugins/group/bridge/Start (73.39s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-tmljz" [5f64d69c-41fe-4b08-b869-4c5f4013c341] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.005060924s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.38s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-190397 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.38s)

TestNetworkPlugins/group/flannel/NetCatPod (10.32s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-190397 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-9p54s" [40710157-da6e-42e1-aad0-c08770d8c0b4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-9p54s" [40710157-da6e-42e1-aad0-c08770d8c0b4] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.002952518s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.32s)

TestNetworkPlugins/group/flannel/DNS (0.22s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-190397 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.22s)

TestNetworkPlugins/group/flannel/Localhost (0.2s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-190397 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.20s)

TestNetworkPlugins/group/flannel/HairPin (0.21s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-190397 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.21s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-190397 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

TestNetworkPlugins/group/bridge/NetCatPod (9.26s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-190397 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-z2w7f" [ae2aed9e-0d5d-4fa0-84a5-880d31daf1ad] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-z2w7f" [ae2aed9e-0d5d-4fa0-84a5-880d31daf1ad] Running
E0916 20:22:56.572760  721428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/auto-190397/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:22:56.579166  721428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/auto-190397/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:22:56.590560  721428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/auto-190397/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:22:56.611923  721428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/auto-190397/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:22:56.653434  721428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/auto-190397/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:22:56.734895  721428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/auto-190397/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:22:56.896837  721428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/auto-190397/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:22:57.218168  721428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/auto-190397/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:22:57.859840  721428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/auto-190397/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:22:59.141839  721428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/auto-190397/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:23:01.430412  721428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/old-k8s-version-908284/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:23:01.703958  721428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/auto-190397/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.00394706s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.26s)

TestNetworkPlugins/group/bridge/DNS (0.17s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-190397 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

TestNetworkPlugins/group/bridge/Localhost (0.15s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-190397 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

TestNetworkPlugins/group/bridge/HairPin (0.16s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-190397 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.16s)

Test skip (28/328)

TestDownloadOnly/v1.20.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.31.1/cached-images (0s)
=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

TestDownloadOnly/v1.31.1/binaries (0s)
=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

TestDownloadOnly/v1.31.1/kubectl (0s)
=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

TestDownloadOnlyKic (0.55s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-112124 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-112124" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-112124
--- SKIP: TestDownloadOnlyKic (0.55s)

TestOffline (0s)
=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/parallel/HelmTiller (0s)
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:446: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.15s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-547240" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-547240
--- SKIP: TestStartStop/group/disable-driver-mounts (0.15s)

TestNetworkPlugins/group/kubenet (3.43s)
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-190397 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-190397
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-190397
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-190397
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-190397
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-190397
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-190397
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-190397
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-190397
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-190397
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-190397
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-190397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-190397"
>>> host: /etc/hosts:
* Profile "kubenet-190397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-190397"
>>> host: /etc/resolv.conf:
* Profile "kubenet-190397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-190397"
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-190397
>>> host: crictl pods:
* Profile "kubenet-190397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-190397"
>>> host: crictl containers:
* Profile "kubenet-190397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-190397"
>>> k8s: describe netcat deployment:
error: context "kubenet-190397" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-190397" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-190397" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-190397" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-190397" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-190397" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-190397" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-190397" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-190397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-190397"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-190397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-190397"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-190397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-190397"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-190397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-190397"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-190397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-190397"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-190397" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-190397" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-190397" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-190397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-190397"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-190397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-190397"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-190397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-190397"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-190397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-190397"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-190397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-190397"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19649-716050/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 16 Sep 2024 19:58:38 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: NoKubernetes-846883
contexts:
- context:
    cluster: NoKubernetes-846883
    extensions:
    - extension:
        last-update: Mon, 16 Sep 2024 19:58:38 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: NoKubernetes-846883
  name: NoKubernetes-846883
current-context: ""
kind: Config
preferences: {}
users:
- name: NoKubernetes-846883
  user:
    client-certificate: /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/NoKubernetes-846883/client.crt
    client-key: /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/NoKubernetes-846883/client.key
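
Note that the kubeconfig captured above contains only the NoKubernetes-846883 entries and an empty current-context, which is consistent with every "context was not found" failure in this dump: the kubenet-190397 profile had already been removed when debugLogs ran. As a minimal sketch of how that condition can be checked programmatically (assuming client-go's clientcmd package; the kubeconfig path and main wrapper are illustrative, not minikube's actual code):

package main

import (
	"fmt"
	"log"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load a kubeconfig like the one captured above (path is illustrative).
	cfg, err := clientcmd.LoadFromFile("/home/jenkins/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	// kubectl resolves --context against cfg.Contexts; the debug commands
	// above fail because no "kubenet-190397" entry exists in that map.
	if _, ok := cfg.Contexts["kubenet-190397"]; !ok {
		fmt.Println(`context "kubenet-190397" does not exist`)
	}
}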

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-190397

>>> host: docker daemon status:
* Profile "kubenet-190397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-190397"

>>> host: docker daemon config:
* Profile "kubenet-190397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-190397"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-190397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-190397"

>>> host: docker system info:
* Profile "kubenet-190397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-190397"

>>> host: cri-docker daemon status:
* Profile "kubenet-190397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-190397"

>>> host: cri-docker daemon config:
* Profile "kubenet-190397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-190397"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-190397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-190397"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-190397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-190397"

>>> host: cri-dockerd version:
* Profile "kubenet-190397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-190397"

>>> host: containerd daemon status:
* Profile "kubenet-190397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-190397"

>>> host: containerd daemon config:
* Profile "kubenet-190397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-190397"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-190397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-190397"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-190397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-190397"

>>> host: containerd config dump:
* Profile "kubenet-190397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-190397"

>>> host: crio daemon status:
* Profile "kubenet-190397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-190397"

>>> host: crio daemon config:
* Profile "kubenet-190397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-190397"

>>> host: /etc/crio:
* Profile "kubenet-190397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-190397"

>>> host: crio config:
* Profile "kubenet-190397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-190397"

----------------------- debugLogs end: kubenet-190397 [took: 3.279879227s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-190397" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-190397
--- SKIP: TestNetworkPlugins/group/kubenet (3.43s)
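
Every k8s and host probe in the dump above failed the same way because the profile was gone by the time debugLogs ran. A hedged sketch of the kind of pre-flight guard that would detect this (contextExists is a hypothetical helper, not minikube's actual code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// contextExists reports whether kubectl's kubeconfig knows the named
// context, by listing context names and comparing each one.
func contextExists(name string) bool {
	out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
	if err != nil {
		return false
	}
	for _, ctx := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if ctx == name {
			return true
		}
	}
	return false
}

func main() {
	if !contextExists("kubenet-190397") {
		fmt.Println("skipping debug log collection: context is gone")
	}
}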

                                                
                                    
TestNetworkPlugins/group/cilium (4.42s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-190397 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-190397

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-190397

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-190397

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-190397

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-190397

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-190397

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-190397

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-190397

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-190397

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-190397

>>> host: /etc/nsswitch.conf:
* Profile "cilium-190397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-190397"

>>> host: /etc/hosts:
* Profile "cilium-190397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-190397"

>>> host: /etc/resolv.conf:
* Profile "cilium-190397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-190397"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-190397

>>> host: crictl pods:
* Profile "cilium-190397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-190397"

>>> host: crictl containers:
* Profile "cilium-190397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-190397"

>>> k8s: describe netcat deployment:
error: context "cilium-190397" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-190397" does not exist

>>> k8s: netcat logs:
error: context "cilium-190397" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-190397" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-190397" does not exist

>>> k8s: coredns logs:
error: context "cilium-190397" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-190397" does not exist

>>> k8s: api server logs:
error: context "cilium-190397" does not exist

>>> host: /etc/cni:
* Profile "cilium-190397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-190397"

>>> host: ip a s:
* Profile "cilium-190397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-190397"

>>> host: ip r s:
* Profile "cilium-190397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-190397"

>>> host: iptables-save:
* Profile "cilium-190397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-190397"

>>> host: iptables table nat:
* Profile "cilium-190397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-190397"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-190397

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-190397

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-190397" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-190397" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-190397

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-190397

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-190397" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-190397" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-190397" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-190397" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-190397" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-190397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-190397"

>>> host: kubelet daemon config:
* Profile "cilium-190397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-190397"

>>> k8s: kubelet logs:
* Profile "cilium-190397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-190397"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-190397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-190397"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-190397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-190397"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19649-716050/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 16 Sep 2024 19:58:38 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: NoKubernetes-846883
contexts:
- context:
    cluster: NoKubernetes-846883
    extensions:
    - extension:
        last-update: Mon, 16 Sep 2024 19:58:38 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: NoKubernetes-846883
  name: NoKubernetes-846883
current-context: ""
kind: Config
preferences: {}
users:
- name: NoKubernetes-846883
  user:
    client-certificate: /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/NoKubernetes-846883/client.crt
    client-key: /home/jenkins/minikube-integration/19649-716050/.minikube/profiles/NoKubernetes-846883/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-190397

>>> host: docker daemon status:
* Profile "cilium-190397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-190397"

>>> host: docker daemon config:
* Profile "cilium-190397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-190397"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-190397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-190397"

>>> host: docker system info:
* Profile "cilium-190397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-190397"

>>> host: cri-docker daemon status:
* Profile "cilium-190397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-190397"

>>> host: cri-docker daemon config:
* Profile "cilium-190397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-190397"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-190397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-190397"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-190397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-190397"

>>> host: cri-dockerd version:
* Profile "cilium-190397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-190397"

>>> host: containerd daemon status:
* Profile "cilium-190397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-190397"

>>> host: containerd daemon config:
* Profile "cilium-190397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-190397"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-190397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-190397"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-190397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-190397"

>>> host: containerd config dump:
* Profile "cilium-190397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-190397"

>>> host: crio daemon status:
* Profile "cilium-190397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-190397"

>>> host: crio daemon config:
* Profile "cilium-190397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-190397"

>>> host: /etc/crio:
* Profile "cilium-190397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-190397"

>>> host: crio config:
* Profile "cilium-190397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-190397"

----------------------- debugLogs end: cilium-190397 [took: 4.208737736s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-190397" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-190397
--- SKIP: TestNetworkPlugins/group/cilium (4.42s)
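
For reference, the skip that net_test.go:102 reports is ordinarily expressed with Go's testing package; a minimal sketch (the test name and surrounding wiring are assumed, not the verbatim minikube source):

package net_test

import "testing"

func TestNetworkPluginsCilium(t *testing.T) {
	// Matches the skip reason logged above; t.Skip marks the test
	// as SKIP without failing the suite.
	t.Skip("Skipping the test as it's interfering with other tests and is outdated")
}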

                                                
                                    