Test Report: Docker_Linux_containerd_arm64 19423

74b5ac7e1cfb7233a98e35daf2ce49e3acb00be2:2024-08-19:35861

Failed tests (2/328)

Order  Failed test                                               Duration (s)
29     TestAddons/serial/Volcano                                 201.01
302    TestStartStop/group/old-k8s-version/serial/SecondStart    384.83
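Both failures can be re-run in isolation before digging through the logs below. A minimal reproduction sketch, assuming a local minikube checkout with the integration suite under test/integration and the prebuilt out/minikube-linux-arm64 binary used in this run (any additional driver/runtime flags your environment needs are not shown):

	go test ./test/integration -v -timeout 90m -run "TestAddons/serial/Volcano"
	go test ./test/integration -v -timeout 90m -run "TestStartStop/group/old-k8s-version/serial/SecondStart"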
TestAddons/serial/Volcano (201.01s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:905: volcano-admission stabilized in 47.619708ms
addons_test.go:897: volcano-scheduler stabilized in 50.249993ms
addons_test.go:913: volcano-controller stabilized in 50.342783ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-f6rfz" [dd6ebc47-b0cc-405a-91f9-14d8d8a0fd42] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.004900356s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-ffrm8" [421a5750-7049-4fd6-9250-05a4c2106443] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 6.003538999s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-kb97v" [c14ca8f3-a66d-45f7-af8e-ba2dc1a075d5] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003148245s
addons_test.go:932: (dbg) Run:  kubectl --context addons-069800 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-069800 create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-069800 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [ce2cf056-f933-420d-8684-17239ac3017a] Pending
helpers_test.go:344: "test-job-nginx-0" [ce2cf056-f933-420d-8684-17239ac3017a] Pending: PodScheduled:Unschedulable (0/1 nodes are unavailable: 1 Insufficient cpu.)
helpers_test.go:329: TestAddons/serial/Volcano: WARNING: pod list for "my-volcano" "volcano.sh/job-name=test-job" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:964: ***** TestAddons/serial/Volcano: pod "volcano.sh/job-name=test-job" failed to start within 3m0s: context deadline exceeded ****
addons_test.go:964: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-069800 -n addons-069800
addons_test.go:964: TestAddons/serial/Volcano: showing logs for failed pods as of 2024-08-19 20:27:13.570982981 +0000 UTC m=+370.530387371
addons_test.go:964: (dbg) Run:  kubectl --context addons-069800 describe po test-job-nginx-0 -n my-volcano
addons_test.go:964: (dbg) kubectl --context addons-069800 describe po test-job-nginx-0 -n my-volcano:
Name:             test-job-nginx-0
Namespace:        my-volcano
Priority:         0
Service Account:  default
Node:             <none>
Labels:           volcano.sh/job-name=test-job
volcano.sh/job-namespace=my-volcano
volcano.sh/queue-name=test
volcano.sh/task-index=0
volcano.sh/task-spec=nginx
Annotations:      scheduling.k8s.io/group-name: test-job-f8991c0b-e82a-4e06-a1b9-f1b3313e8fa0
volcano.sh/job-name: test-job
volcano.sh/job-version: 0
volcano.sh/queue-name: test
volcano.sh/task-index: 0
volcano.sh/task-spec: nginx
volcano.sh/template-uid: test-job-nginx
Status:           Pending
IP:               
IPs:              <none>
Controlled By:    Job/test-job
Containers:
nginx:
Image:      nginx:latest
Port:       <none>
Host Port:  <none>
Command:
sleep
10m
Limits:
cpu:  1
Requests:
cpu:  1
Environment:
GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
PROJECT_ID:                      this_is_fake
GCP_PROJECT:                     this_is_fake
GCLOUD_PROJECT:                  this_is_fake
GOOGLE_CLOUD_PROJECT:            this_is_fake
CLOUDSDK_CORE_PROJECT:           this_is_fake
Mounts:
/google-app-creds.json from gcp-creds (ro)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4vxq6 (ro)
Conditions:
Type           Status
PodScheduled   False 
Volumes:
kube-api-access-4vxq6:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
gcp-creds:
Type:          HostPath (bare host directory volume)
Path:          /var/lib/minikube/google_application_credentials.json
HostPathType:  File
QoS Class:         Burstable
Node-Selectors:    <none>
Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason            Age    From     Message
----     ------            ----   ----     -------
Warning  FailedScheduling  2m59s  volcano  0/1 nodes are unavailable: 1 Insufficient cpu.
addons_test.go:964: (dbg) Run:  kubectl --context addons-069800 logs test-job-nginx-0 -n my-volcano
addons_test.go:964: (dbg) kubectl --context addons-069800 logs test-job-nginx-0 -n my-volcano:
addons_test.go:965: failed waiting for test-local-path pod: volcano.sh/job-name=test-job within 3m0s: context deadline exceeded
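The FailedScheduling event above is the root cause here: the test job requests a full CPU, but the single-node cluster was created with only 2 CPUs (see the docker run --cpus=2 line in the start log below), most of which the enabled addons already claim. A diagnostic sketch for confirming the allocation on a live profile, assuming kubectl access to the addons-069800 context (in a single-node minikube cluster the node name matches the profile name):

	kubectl --context addons-069800 describe node addons-069800
	kubectl --context addons-069800 get pods -A -o custom-columns=NAMESPACE:.metadata.namespace,NAME:.metadata.name,CPU_REQUEST:.spec.containers[*].resources.requests.cpu

The first command reports the node's Allocatable CPU and its "Allocated resources" summary; the second lists each pod's CPU request so the shortfall can be attributed.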
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/serial/Volcano]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-069800
helpers_test.go:235: (dbg) docker inspect addons-069800:

-- stdout --
	[
	    {
	        "Id": "c49d594c0fa607eb96b9e342db7e13e57454fd0e707111aaef3d7db7de491cb1",
	        "Created": "2024-08-19T20:21:47.774992587Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1146288,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-08-19T20:21:47.887702013Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:decdd59746a9dba10062a73f6cd4b910c7b4e60613660b1022f8357747681c4d",
	        "ResolvConfPath": "/var/lib/docker/containers/c49d594c0fa607eb96b9e342db7e13e57454fd0e707111aaef3d7db7de491cb1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c49d594c0fa607eb96b9e342db7e13e57454fd0e707111aaef3d7db7de491cb1/hostname",
	        "HostsPath": "/var/lib/docker/containers/c49d594c0fa607eb96b9e342db7e13e57454fd0e707111aaef3d7db7de491cb1/hosts",
	        "LogPath": "/var/lib/docker/containers/c49d594c0fa607eb96b9e342db7e13e57454fd0e707111aaef3d7db7de491cb1/c49d594c0fa607eb96b9e342db7e13e57454fd0e707111aaef3d7db7de491cb1-json.log",
	        "Name": "/addons-069800",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "addons-069800:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-069800",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/94d34689285ba51247d2848e184d0f6197d1580c62b093cb0ac78cfa1cc3b74a-init/diff:/var/lib/docker/overlay2/56755d81a5447e9a4d21cbfbceb5eeee713182a8ca21fd0322f2eb2e99f83e1f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/94d34689285ba51247d2848e184d0f6197d1580c62b093cb0ac78cfa1cc3b74a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/94d34689285ba51247d2848e184d0f6197d1580c62b093cb0ac78cfa1cc3b74a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/94d34689285ba51247d2848e184d0f6197d1580c62b093cb0ac78cfa1cc3b74a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-069800",
	                "Source": "/var/lib/docker/volumes/addons-069800/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-069800",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-069800",
	                "name.minikube.sigs.k8s.io": "addons-069800",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ed2d6b6a2a27619255a73cf3abbdfbfc9a1ce7e59b76471ff5a5d296ce156939",
	            "SandboxKey": "/var/run/docker/netns/ed2d6b6a2a27",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33928"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33929"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33932"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33930"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33931"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-069800": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "9e3f503e3fdc37a9a1e5cfba2a5a992b9254e1e33e995e716a0cca5c06e4bb56",
	                    "EndpointID": "3a7b53d2cdadb4cac35852439a9aae163c66ba7db8581d33baf07192e4b09f67",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-069800",
	                        "c49d594c0fa6"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-069800 -n addons-069800
helpers_test.go:244: <<< TestAddons/serial/Volcano FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/serial/Volcano]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-069800 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-069800 logs -n 25: (1.688054126s)
helpers_test.go:252: TestAddons/serial/Volcano logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-166545   | jenkins | v1.33.1 | 19 Aug 24 20:21 UTC |                     |
	|         | -p download-only-166545              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.33.1 | 19 Aug 24 20:21 UTC | 19 Aug 24 20:21 UTC |
	| delete  | -p download-only-166545              | download-only-166545   | jenkins | v1.33.1 | 19 Aug 24 20:21 UTC | 19 Aug 24 20:21 UTC |
	| start   | -o=json --download-only              | download-only-327825   | jenkins | v1.33.1 | 19 Aug 24 20:21 UTC |                     |
	|         | -p download-only-327825              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0         |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.33.1 | 19 Aug 24 20:21 UTC | 19 Aug 24 20:21 UTC |
	| delete  | -p download-only-327825              | download-only-327825   | jenkins | v1.33.1 | 19 Aug 24 20:21 UTC | 19 Aug 24 20:21 UTC |
	| delete  | -p download-only-166545              | download-only-166545   | jenkins | v1.33.1 | 19 Aug 24 20:21 UTC | 19 Aug 24 20:21 UTC |
	| delete  | -p download-only-327825              | download-only-327825   | jenkins | v1.33.1 | 19 Aug 24 20:21 UTC | 19 Aug 24 20:21 UTC |
	| start   | --download-only -p                   | download-docker-237579 | jenkins | v1.33.1 | 19 Aug 24 20:21 UTC |                     |
	|         | download-docker-237579               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p download-docker-237579            | download-docker-237579 | jenkins | v1.33.1 | 19 Aug 24 20:21 UTC | 19 Aug 24 20:21 UTC |
	| start   | --download-only -p                   | binary-mirror-258519   | jenkins | v1.33.1 | 19 Aug 24 20:21 UTC |                     |
	|         | binary-mirror-258519                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:40993               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-258519              | binary-mirror-258519   | jenkins | v1.33.1 | 19 Aug 24 20:21 UTC | 19 Aug 24 20:21 UTC |
	| addons  | disable dashboard -p                 | addons-069800          | jenkins | v1.33.1 | 19 Aug 24 20:21 UTC |                     |
	|         | addons-069800                        |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                  | addons-069800          | jenkins | v1.33.1 | 19 Aug 24 20:21 UTC |                     |
	|         | addons-069800                        |                        |         |         |                     |                     |
	| start   | -p addons-069800 --wait=true         | addons-069800          | jenkins | v1.33.1 | 19 Aug 24 20:21 UTC | 19 Aug 24 20:23 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 20:21:23
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 20:21:23.763944 1145786 out.go:345] Setting OutFile to fd 1 ...
	I0819 20:21:23.764566 1145786 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 20:21:23.764578 1145786 out.go:358] Setting ErrFile to fd 2...
	I0819 20:21:23.764584 1145786 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 20:21:23.764834 1145786 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-1139612/.minikube/bin
	I0819 20:21:23.765308 1145786 out.go:352] Setting JSON to false
	I0819 20:21:23.766168 1145786 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":14631,"bootTime":1724084253,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0819 20:21:23.766238 1145786 start.go:139] virtualization:  
	I0819 20:21:23.768404 1145786 out.go:177] * [addons-069800] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0819 20:21:23.770231 1145786 out.go:177]   - MINIKUBE_LOCATION=19423
	I0819 20:21:23.770301 1145786 notify.go:220] Checking for updates...
	I0819 20:21:23.773284 1145786 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 20:21:23.774611 1145786 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19423-1139612/kubeconfig
	I0819 20:21:23.775904 1145786 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-1139612/.minikube
	I0819 20:21:23.777297 1145786 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0819 20:21:23.778703 1145786 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 20:21:23.780201 1145786 driver.go:394] Setting default libvirt URI to qemu:///system
	I0819 20:21:23.811727 1145786 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0819 20:21:23.811872 1145786 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 20:21:23.870226 1145786 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-08-19 20:21:23.860846149 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0819 20:21:23.870340 1145786 docker.go:307] overlay module found
	I0819 20:21:23.872901 1145786 out.go:177] * Using the docker driver based on user configuration
	I0819 20:21:23.874710 1145786 start.go:297] selected driver: docker
	I0819 20:21:23.874727 1145786 start.go:901] validating driver "docker" against <nil>
	I0819 20:21:23.874740 1145786 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 20:21:23.875371 1145786 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 20:21:23.928016 1145786 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-08-19 20:21:23.918628307 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0819 20:21:23.928187 1145786 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 20:21:23.928452 1145786 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 20:21:23.929778 1145786 out.go:177] * Using Docker driver with root privileges
	I0819 20:21:23.930943 1145786 cni.go:84] Creating CNI manager for ""
	I0819 20:21:23.930970 1145786 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0819 20:21:23.930980 1145786 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0819 20:21:23.931052 1145786 start.go:340] cluster config:
	{Name:addons-069800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-069800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHA
uthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 20:21:23.932640 1145786 out.go:177] * Starting "addons-069800" primary control-plane node in "addons-069800" cluster
	I0819 20:21:23.934004 1145786 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0819 20:21:23.935342 1145786 out.go:177] * Pulling base image v0.0.44-1723740748-19452 ...
	I0819 20:21:23.936812 1145786 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime containerd
	I0819 20:21:23.936870 1145786 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19423-1139612/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4
	I0819 20:21:23.936884 1145786 cache.go:56] Caching tarball of preloaded images
	I0819 20:21:23.936890 1145786 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local docker daemon
	I0819 20:21:23.936966 1145786 preload.go:172] Found /home/jenkins/minikube-integration/19423-1139612/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 20:21:23.936976 1145786 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on containerd
	I0819 20:21:23.937313 1145786 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/addons-069800/config.json ...
	I0819 20:21:23.937341 1145786 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/addons-069800/config.json: {Name:mkb409ab7d6a8710a2c66ca5025711b972df672f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 20:21:23.952005 1145786 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d to local cache
	I0819 20:21:23.952127 1145786 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local cache directory
	I0819 20:21:23.952151 1145786 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local cache directory, skipping pull
	I0819 20:21:23.952160 1145786 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d exists in cache, skipping pull
	I0819 20:21:23.952168 1145786 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d as a tarball
	I0819 20:21:23.952174 1145786 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d from local cache
	I0819 20:21:40.662882 1145786 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d from cached tarball
	I0819 20:21:40.662923 1145786 cache.go:194] Successfully downloaded all kic artifacts
	I0819 20:21:40.662976 1145786 start.go:360] acquireMachinesLock for addons-069800: {Name:mk231a0c2181cad7ca823188f94c17708149b8ef Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 20:21:40.663117 1145786 start.go:364] duration metric: took 114.516µs to acquireMachinesLock for "addons-069800"
	I0819 20:21:40.663154 1145786 start.go:93] Provisioning new machine with config: &{Name:addons-069800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-069800 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:fa
lse CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0819 20:21:40.663248 1145786 start.go:125] createHost starting for "" (driver="docker")
	I0819 20:21:40.665442 1145786 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0819 20:21:40.665703 1145786 start.go:159] libmachine.API.Create for "addons-069800" (driver="docker")
	I0819 20:21:40.665738 1145786 client.go:168] LocalClient.Create starting
	I0819 20:21:40.665845 1145786 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19423-1139612/.minikube/certs/ca.pem
	I0819 20:21:41.126496 1145786 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19423-1139612/.minikube/certs/cert.pem
	I0819 20:21:41.688505 1145786 cli_runner.go:164] Run: docker network inspect addons-069800 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0819 20:21:41.703507 1145786 cli_runner.go:211] docker network inspect addons-069800 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0819 20:21:41.703625 1145786 network_create.go:284] running [docker network inspect addons-069800] to gather additional debugging logs...
	I0819 20:21:41.703650 1145786 cli_runner.go:164] Run: docker network inspect addons-069800
	W0819 20:21:41.719847 1145786 cli_runner.go:211] docker network inspect addons-069800 returned with exit code 1
	I0819 20:21:41.719875 1145786 network_create.go:287] error running [docker network inspect addons-069800]: docker network inspect addons-069800: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-069800 not found
	I0819 20:21:41.719906 1145786 network_create.go:289] output of [docker network inspect addons-069800]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-069800 not found
	
	** /stderr **
	I0819 20:21:41.720007 1145786 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0819 20:21:41.735467 1145786 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40017d5e60}
	I0819 20:21:41.735507 1145786 network_create.go:124] attempt to create docker network addons-069800 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0819 20:21:41.735574 1145786 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-069800 addons-069800
	I0819 20:21:41.800561 1145786 network_create.go:108] docker network addons-069800 192.168.49.0/24 created
	I0819 20:21:41.800589 1145786 kic.go:121] calculated static IP "192.168.49.2" for the "addons-069800" container
	I0819 20:21:41.800658 1145786 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0819 20:21:41.815836 1145786 cli_runner.go:164] Run: docker volume create addons-069800 --label name.minikube.sigs.k8s.io=addons-069800 --label created_by.minikube.sigs.k8s.io=true
	I0819 20:21:41.831424 1145786 oci.go:103] Successfully created a docker volume addons-069800
	I0819 20:21:41.831520 1145786 cli_runner.go:164] Run: docker run --rm --name addons-069800-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-069800 --entrypoint /usr/bin/test -v addons-069800:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d -d /var/lib
	I0819 20:21:43.355633 1145786 cli_runner.go:217] Completed: docker run --rm --name addons-069800-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-069800 --entrypoint /usr/bin/test -v addons-069800:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d -d /var/lib: (1.524066246s)
	I0819 20:21:43.355663 1145786 oci.go:107] Successfully prepared a docker volume addons-069800
	I0819 20:21:43.355688 1145786 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime containerd
	I0819 20:21:43.355707 1145786 kic.go:194] Starting extracting preloaded images to volume ...
	I0819 20:21:43.355791 1145786 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19423-1139612/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-069800:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d -I lz4 -xf /preloaded.tar -C /extractDir
	I0819 20:21:47.710637 1145786 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19423-1139612/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-069800:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d -I lz4 -xf /preloaded.tar -C /extractDir: (4.354803872s)
	I0819 20:21:47.710676 1145786 kic.go:203] duration metric: took 4.354964762s to extract preloaded images to volume ...
	W0819 20:21:47.710827 1145786 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0819 20:21:47.710952 1145786 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0819 20:21:47.760990 1145786 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-069800 --name addons-069800 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-069800 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-069800 --network addons-069800 --ip 192.168.49.2 --volume addons-069800:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d
	I0819 20:21:48.050633 1145786 cli_runner.go:164] Run: docker container inspect addons-069800 --format={{.State.Running}}
	I0819 20:21:48.075989 1145786 cli_runner.go:164] Run: docker container inspect addons-069800 --format={{.State.Status}}
	I0819 20:21:48.103674 1145786 cli_runner.go:164] Run: docker exec addons-069800 stat /var/lib/dpkg/alternatives/iptables
	I0819 20:21:48.170054 1145786 oci.go:144] the created container "addons-069800" has a running status.
	I0819 20:21:48.170086 1145786 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19423-1139612/.minikube/machines/addons-069800/id_rsa...
	I0819 20:21:48.612204 1145786 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19423-1139612/.minikube/machines/addons-069800/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0819 20:21:48.642682 1145786 cli_runner.go:164] Run: docker container inspect addons-069800 --format={{.State.Status}}
	I0819 20:21:48.669426 1145786 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0819 20:21:48.669450 1145786 kic_runner.go:114] Args: [docker exec --privileged addons-069800 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0819 20:21:48.753794 1145786 cli_runner.go:164] Run: docker container inspect addons-069800 --format={{.State.Status}}
	I0819 20:21:48.778701 1145786 machine.go:93] provisionDockerMachine start ...
	I0819 20:21:48.778798 1145786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069800
	I0819 20:21:48.810290 1145786 main.go:141] libmachine: Using SSH client type: native
	I0819 20:21:48.810570 1145786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33928 <nil> <nil>}
	I0819 20:21:48.810589 1145786 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 20:21:48.968257 1145786 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-069800
	
	I0819 20:21:48.968284 1145786 ubuntu.go:169] provisioning hostname "addons-069800"
	I0819 20:21:48.968404 1145786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069800
	I0819 20:21:48.992752 1145786 main.go:141] libmachine: Using SSH client type: native
	I0819 20:21:48.993010 1145786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33928 <nil> <nil>}
	I0819 20:21:48.993029 1145786 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-069800 && echo "addons-069800" | sudo tee /etc/hostname
	I0819 20:21:49.142399 1145786 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-069800
	
	I0819 20:21:49.142473 1145786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069800
	I0819 20:21:49.162585 1145786 main.go:141] libmachine: Using SSH client type: native
	I0819 20:21:49.162837 1145786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33928 <nil> <nil>}
	I0819 20:21:49.162852 1145786 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-069800' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-069800/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-069800' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 20:21:49.304080 1145786 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 20:21:49.304108 1145786 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19423-1139612/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-1139612/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-1139612/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-1139612/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-1139612/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-1139612/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-1139612/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-1139612/.minikube}
	I0819 20:21:49.304130 1145786 ubuntu.go:177] setting up certificates
	I0819 20:21:49.304140 1145786 provision.go:84] configureAuth start
	I0819 20:21:49.304215 1145786 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-069800
	I0819 20:21:49.320841 1145786 provision.go:143] copyHostCerts
	I0819 20:21:49.320930 1145786 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-1139612/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-1139612/.minikube/ca.pem (1078 bytes)
	I0819 20:21:49.321059 1145786 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-1139612/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-1139612/.minikube/cert.pem (1123 bytes)
	I0819 20:21:49.321119 1145786 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-1139612/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-1139612/.minikube/key.pem (1675 bytes)
	I0819 20:21:49.321176 1145786 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-1139612/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-1139612/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-1139612/.minikube/certs/ca-key.pem org=jenkins.addons-069800 san=[127.0.0.1 192.168.49.2 addons-069800 localhost minikube]
	I0819 20:21:50.630844 1145786 provision.go:177] copyRemoteCerts
	I0819 20:21:50.630929 1145786 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 20:21:50.630977 1145786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069800
	I0819 20:21:50.647399 1145786 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33928 SSHKeyPath:/home/jenkins/minikube-integration/19423-1139612/.minikube/machines/addons-069800/id_rsa Username:docker}
	I0819 20:21:50.741184 1145786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1139612/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0819 20:21:50.765107 1145786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1139612/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0819 20:21:50.789245 1145786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1139612/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0819 20:21:50.813630 1145786 provision.go:87] duration metric: took 1.509465727s to configureAuth
	I0819 20:21:50.813656 1145786 ubuntu.go:193] setting minikube options for container-runtime
	I0819 20:21:50.813852 1145786 config.go:182] Loaded profile config "addons-069800": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0819 20:21:50.813868 1145786 machine.go:96] duration metric: took 2.035147147s to provisionDockerMachine
	I0819 20:21:50.813875 1145786 client.go:171] duration metric: took 10.148130262s to LocalClient.Create
	I0819 20:21:50.813895 1145786 start.go:167] duration metric: took 10.148193447s to libmachine.API.Create "addons-069800"
	I0819 20:21:50.813907 1145786 start.go:293] postStartSetup for "addons-069800" (driver="docker")
	I0819 20:21:50.813917 1145786 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 20:21:50.813970 1145786 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 20:21:50.814025 1145786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069800
	I0819 20:21:50.830192 1145786 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33928 SSHKeyPath:/home/jenkins/minikube-integration/19423-1139612/.minikube/machines/addons-069800/id_rsa Username:docker}
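Port 33928 used by the ssh clients above is the host port Docker mapped to the container's 22/tcp, read with the inspect template shown in the cli_runner lines. A small Go sketch of that lookup (error handling trimmed; the container name is taken from this run):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same Go template the cli_runner invocations above pass to docker inspect.
	out, err := exec.Command("docker", "container", "inspect", "-f",
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
		"addons-069800").Output()
	if err != nil {
		panic(err)
	}
	port := strings.TrimSpace(string(out))
	fmt.Printf("ssh -i id_rsa -p %s docker@127.0.0.1\n", port) // 33928 in this run
}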
	I0819 20:21:50.925274 1145786 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 20:21:50.928415 1145786 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0819 20:21:50.928460 1145786 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0819 20:21:50.928473 1145786 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0819 20:21:50.928480 1145786 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0819 20:21:50.928490 1145786 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-1139612/.minikube/addons for local assets ...
	I0819 20:21:50.928564 1145786 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-1139612/.minikube/files for local assets ...
	I0819 20:21:50.928593 1145786 start.go:296] duration metric: took 114.679683ms for postStartSetup
	I0819 20:21:50.928912 1145786 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-069800
	I0819 20:21:50.944515 1145786 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/addons-069800/config.json ...
	I0819 20:21:50.944815 1145786 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 20:21:50.944873 1145786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069800
	I0819 20:21:50.961025 1145786 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33928 SSHKeyPath:/home/jenkins/minikube-integration/19423-1139612/.minikube/machines/addons-069800/id_rsa Username:docker}
	I0819 20:21:51.053224 1145786 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0819 20:21:51.057868 1145786 start.go:128] duration metric: took 10.394580621s to createHost
	I0819 20:21:51.057893 1145786 start.go:83] releasing machines lock for "addons-069800", held for 10.394759489s
	I0819 20:21:51.057964 1145786 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-069800
	I0819 20:21:51.075243 1145786 ssh_runner.go:195] Run: cat /version.json
	I0819 20:21:51.075304 1145786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069800
	I0819 20:21:51.075330 1145786 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 20:21:51.075399 1145786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069800
	I0819 20:21:51.097037 1145786 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33928 SSHKeyPath:/home/jenkins/minikube-integration/19423-1139612/.minikube/machines/addons-069800/id_rsa Username:docker}
	I0819 20:21:51.110706 1145786 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33928 SSHKeyPath:/home/jenkins/minikube-integration/19423-1139612/.minikube/machines/addons-069800/id_rsa Username:docker}
	I0819 20:21:51.334753 1145786 ssh_runner.go:195] Run: systemctl --version
	I0819 20:21:51.339283 1145786 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0819 20:21:51.343576 1145786 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0819 20:21:51.368916 1145786 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0819 20:21:51.369019 1145786 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 20:21:51.399095 1145786 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
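The two find pipelines above first patch any loopback CNI conf (add a "name" field if missing, pin cniVersion to 1.0.0) and then rename bridge/podman confs to *.mk_disabled so only the CNI minikube installs later is active. A rough Go equivalent of the loopback patch, assuming a conventional conf path (minikube itself does this remotely via sed):

package main

import (
	"os"
	"regexp"
	"strings"
)

func main() {
	path := "/etc/cni/net.d/200-loopback.conf" // illustrative filename
	data, err := os.ReadFile(path)
	if err != nil {
		return // nothing to patch
	}
	s := string(data)
	// Insert a "name" field if the loopback conf lacks one.
	if strings.Contains(s, "loopback") && !strings.Contains(s, `"name"`) {
		s = strings.Replace(s, `"type": "loopback"`,
			"\"name\": \"loopback\",\n    \"type\": \"loopback\"", 1)
	}
	// Pin the config to CNI spec 1.0.0, as the sed expression does.
	s = regexp.MustCompile(`"cniVersion": ".*"`).ReplaceAllString(s, `"cniVersion": "1.0.0"`)
	os.WriteFile(path, []byte(s), 0644)
}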
	I0819 20:21:51.399123 1145786 start.go:495] detecting cgroup driver to use...
	I0819 20:21:51.399157 1145786 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0819 20:21:51.399222 1145786 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0819 20:21:51.412298 1145786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0819 20:21:51.423884 1145786 docker.go:217] disabling cri-docker service (if available) ...
	I0819 20:21:51.423958 1145786 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 20:21:51.438022 1145786 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 20:21:51.453147 1145786 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 20:21:51.540364 1145786 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 20:21:51.636928 1145786 docker.go:233] disabling docker service ...
	I0819 20:21:51.637012 1145786 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 20:21:51.660251 1145786 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 20:21:51.672536 1145786 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 20:21:51.760719 1145786 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 20:21:51.851509 1145786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 20:21:51.862888 1145786 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 20:21:51.878639 1145786 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0819 20:21:51.888350 1145786 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0819 20:21:51.898312 1145786 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0819 20:21:51.898419 1145786 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0819 20:21:51.908982 1145786 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 20:21:51.919374 1145786 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0819 20:21:51.929640 1145786 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 20:21:51.939342 1145786 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 20:21:51.949232 1145786 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0819 20:21:51.959408 1145786 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0819 20:21:51.970130 1145786 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0819 20:21:51.980610 1145786 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 20:21:51.989552 1145786 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 20:21:51.998231 1145786 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 20:21:52.085592 1145786 ssh_runner.go:195] Run: sudo systemctl restart containerd
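The sed edits above adjust containerd's CRI plugin before the restart: pin the pause image, force SystemdCgroup = false to match the cgroupfs driver detected on the host, and point conf_dir at /etc/cni/net.d. A Go sketch of the same in-place rewrites (patterns mirror a subset of the sed expressions in the log; this is not minikube's code, which runs sed over ssh):

package main

import (
	"os"
	"regexp"
)

func main() {
	path := "/etc/containerd/config.toml"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	s := string(data)
	// sandbox_image = "registry.k8s.io/pause:3.10"
	s = regexp.MustCompile(`(?m)^( *)sandbox_image = .*$`).
		ReplaceAllString(s, `${1}sandbox_image = "registry.k8s.io/pause:3.10"`)
	// SystemdCgroup = false (cgroupfs driver)
	s = regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`).
		ReplaceAllString(s, `${1}SystemdCgroup = false`)
	// conf_dir = "/etc/cni/net.d"
	s = regexp.MustCompile(`(?m)^( *)conf_dir = .*$`).
		ReplaceAllString(s, `${1}conf_dir = "/etc/cni/net.d"`)
	if err := os.WriteFile(path, []byte(s), 0644); err != nil {
		panic(err)
	}
}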
	I0819 20:21:52.205855 1145786 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0819 20:21:52.206000 1145786 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0819 20:21:52.209849 1145786 start.go:563] Will wait 60s for crictl version
	I0819 20:21:52.209957 1145786 ssh_runner.go:195] Run: which crictl
	I0819 20:21:52.213601 1145786 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 20:21:52.259011 1145786 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.20
	RuntimeApiVersion:  v1
	I0819 20:21:52.259143 1145786 ssh_runner.go:195] Run: containerd --version
	I0819 20:21:52.281100 1145786 ssh_runner.go:195] Run: containerd --version
	I0819 20:21:52.308045 1145786 out.go:177] * Preparing Kubernetes v1.31.0 on containerd 1.7.20 ...
	I0819 20:21:52.310618 1145786 cli_runner.go:164] Run: docker network inspect addons-069800 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0819 20:21:52.326235 1145786 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0819 20:21:52.330189 1145786 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
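The bash one-liner above rewrites /etc/hosts so host.minikube.internal resolves to the docker network gateway (192.168.49.1): filter out any stale entry, append the new one, and copy the temp file back over /etc/hosts. A plain Go sketch of the same idea (not minikube's code; it runs the shell pipeline over ssh):

package main

import (
	"os"
	"strings"
)

func main() {
	const entry = "192.168.49.1\thost.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop any existing host.minikube.internal mapping.
		if !strings.HasSuffix(line, "\thost.minikube.internal") {
			kept = append(kept, line)
		}
	}
	kept = append(kept, entry)
	os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644)
}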
	I0819 20:21:52.341667 1145786 kubeadm.go:883] updating cluster {Name:addons-069800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-069800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 20:21:52.341804 1145786 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime containerd
	I0819 20:21:52.341883 1145786 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 20:21:52.383816 1145786 containerd.go:627] all images are preloaded for containerd runtime.
	I0819 20:21:52.383841 1145786 containerd.go:534] Images already preloaded, skipping extraction
	I0819 20:21:52.383905 1145786 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 20:21:52.420560 1145786 containerd.go:627] all images are preloaded for containerd runtime.
	I0819 20:21:52.420585 1145786 cache_images.go:84] Images are preloaded, skipping loading
	I0819 20:21:52.420595 1145786 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.0 containerd true true} ...
	I0819 20:21:52.420774 1145786 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-069800 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:addons-069800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 20:21:52.420850 1145786 ssh_runner.go:195] Run: sudo crictl info
	I0819 20:21:52.457659 1145786 cni.go:84] Creating CNI manager for ""
	I0819 20:21:52.457684 1145786 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0819 20:21:52.457695 1145786 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 20:21:52.457718 1145786 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-069800 NodeName:addons-069800 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 20:21:52.457857 1145786 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-069800"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 20:21:52.457927 1145786 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 20:21:52.467065 1145786 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 20:21:52.467137 1145786 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 20:21:52.475885 1145786 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0819 20:21:52.494582 1145786 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 20:21:52.512950 1145786 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2167 bytes)
	I0819 20:21:52.531131 1145786 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0819 20:21:52.534560 1145786 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 20:21:52.545498 1145786 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 20:21:52.629856 1145786 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 20:21:52.647322 1145786 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/addons-069800 for IP: 192.168.49.2
	I0819 20:21:52.647390 1145786 certs.go:194] generating shared ca certs ...
	I0819 20:21:52.647422 1145786 certs.go:226] acquiring lock for ca certs: {Name:mk862c79d80b8fe3a5df83b1592928b3403a862f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 20:21:52.647598 1145786 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-1139612/.minikube/ca.key
	I0819 20:21:53.746529 1145786 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19423-1139612/.minikube/ca.crt ...
	I0819 20:21:53.746561 1145786 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-1139612/.minikube/ca.crt: {Name:mkfbcd944e2a55bd14cc843401c33677a6352712 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 20:21:53.746763 1145786 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19423-1139612/.minikube/ca.key ...
	I0819 20:21:53.746777 1145786 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-1139612/.minikube/ca.key: {Name:mk1c7199e5adf64741aba94623a6bb65987a84d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 20:21:53.747340 1145786 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-1139612/.minikube/proxy-client-ca.key
	I0819 20:21:55.055838 1145786 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19423-1139612/.minikube/proxy-client-ca.crt ...
	I0819 20:21:55.055929 1145786 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-1139612/.minikube/proxy-client-ca.crt: {Name:mkc8b520163fc3514b1cfbd46ccdce66401a0fb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 20:21:55.056663 1145786 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19423-1139612/.minikube/proxy-client-ca.key ...
	I0819 20:21:55.056726 1145786 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-1139612/.minikube/proxy-client-ca.key: {Name:mk7e1d8079f0ce14e406b4e4a60f290313cd793b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 20:21:55.056891 1145786 certs.go:256] generating profile certs ...
	I0819 20:21:55.056993 1145786 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/addons-069800/client.key
	I0819 20:21:55.057039 1145786 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/addons-069800/client.crt with IP's: []
	I0819 20:21:55.381741 1145786 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/addons-069800/client.crt ...
	I0819 20:21:55.381774 1145786 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/addons-069800/client.crt: {Name:mkf617ef8b1f8a28ae59913b9d45c4c6242fed85 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 20:21:55.381970 1145786 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/addons-069800/client.key ...
	I0819 20:21:55.381983 1145786 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/addons-069800/client.key: {Name:mka210fee30cee99cbae900461a6b04a3f7e541e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 20:21:55.382061 1145786 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/addons-069800/apiserver.key.a5b07a97
	I0819 20:21:55.382077 1145786 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/addons-069800/apiserver.crt.a5b07a97 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0819 20:21:55.810559 1145786 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/addons-069800/apiserver.crt.a5b07a97 ...
	I0819 20:21:55.810594 1145786 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/addons-069800/apiserver.crt.a5b07a97: {Name:mk6338e5fc76c34c832bd62a307aab4fa1e2976e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 20:21:55.810815 1145786 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/addons-069800/apiserver.key.a5b07a97 ...
	I0819 20:21:55.810831 1145786 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/addons-069800/apiserver.key.a5b07a97: {Name:mkaffb34f47d735ee32622c69980fbb94a99e702 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 20:21:55.811432 1145786 certs.go:381] copying /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/addons-069800/apiserver.crt.a5b07a97 -> /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/addons-069800/apiserver.crt
	I0819 20:21:55.811536 1145786 certs.go:385] copying /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/addons-069800/apiserver.key.a5b07a97 -> /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/addons-069800/apiserver.key
	I0819 20:21:55.811595 1145786 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/addons-069800/proxy-client.key
	I0819 20:21:55.811617 1145786 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/addons-069800/proxy-client.crt with IP's: []
	I0819 20:21:55.967248 1145786 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/addons-069800/proxy-client.crt ...
	I0819 20:21:55.967282 1145786 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/addons-069800/proxy-client.crt: {Name:mk820ed202919d2478f0dfc12e3dad155a3672f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 20:21:55.967890 1145786 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/addons-069800/proxy-client.key ...
	I0819 20:21:55.967908 1145786 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/addons-069800/proxy-client.key: {Name:mk30ec7fb5695db03e6cf665d7ab36e30338996c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 20:21:55.968115 1145786 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-1139612/.minikube/certs/ca-key.pem (1679 bytes)
	I0819 20:21:55.968167 1145786 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-1139612/.minikube/certs/ca.pem (1078 bytes)
	I0819 20:21:55.968199 1145786 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-1139612/.minikube/certs/cert.pem (1123 bytes)
	I0819 20:21:55.968248 1145786 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-1139612/.minikube/certs/key.pem (1675 bytes)
	I0819 20:21:55.968850 1145786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1139612/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 20:21:55.994770 1145786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1139612/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 20:21:56.025037 1145786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1139612/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 20:21:56.051283 1145786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1139612/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0819 20:21:56.078333 1145786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/addons-069800/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0819 20:21:56.104522 1145786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/addons-069800/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0819 20:21:56.128814 1145786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/addons-069800/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 20:21:56.154259 1145786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/addons-069800/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0819 20:21:56.179435 1145786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1139612/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 20:21:56.204308 1145786 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 20:21:56.223294 1145786 ssh_runner.go:195] Run: openssl version
	I0819 20:21:56.228862 1145786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 20:21:56.238598 1145786 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 20:21:56.242494 1145786 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 20:21 /usr/share/ca-certificates/minikubeCA.pem
	I0819 20:21:56.242623 1145786 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 20:21:56.249834 1145786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 20:21:56.259429 1145786 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 20:21:56.263250 1145786 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0819 20:21:56.263349 1145786 kubeadm.go:392] StartCluster: {Name:addons-069800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-069800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 20:21:56.263468 1145786 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0819 20:21:56.263575 1145786 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 20:21:56.304388 1145786 cri.go:89] found id: ""
	I0819 20:21:56.304529 1145786 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 20:21:56.315144 1145786 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 20:21:56.324421 1145786 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0819 20:21:56.324514 1145786 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 20:21:56.333552 1145786 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 20:21:56.333576 1145786 kubeadm.go:157] found existing configuration files:
	
	I0819 20:21:56.333631 1145786 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 20:21:56.342614 1145786 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 20:21:56.342719 1145786 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 20:21:56.351149 1145786 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 20:21:56.360519 1145786 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 20:21:56.360589 1145786 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 20:21:56.369347 1145786 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 20:21:56.378231 1145786 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 20:21:56.378295 1145786 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 20:21:56.386827 1145786 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 20:21:56.395795 1145786 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 20:21:56.395866 1145786 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 20:21:56.404683 1145786 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0819 20:21:56.450467 1145786 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0819 20:21:56.450718 1145786 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 20:21:56.478512 1145786 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0819 20:21:56.478630 1145786 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1067-aws
	I0819 20:21:56.478695 1145786 kubeadm.go:310] OS: Linux
	I0819 20:21:56.478777 1145786 kubeadm.go:310] CGROUPS_CPU: enabled
	I0819 20:21:56.478856 1145786 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0819 20:21:56.478922 1145786 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0819 20:21:56.479001 1145786 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0819 20:21:56.479080 1145786 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0819 20:21:56.479180 1145786 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0819 20:21:56.479271 1145786 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0819 20:21:56.479355 1145786 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0819 20:21:56.479432 1145786 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0819 20:21:56.550439 1145786 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 20:21:56.550604 1145786 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 20:21:56.550718 1145786 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0819 20:21:56.556087 1145786 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 20:21:56.561777 1145786 out.go:235]   - Generating certificates and keys ...
	I0819 20:21:56.561977 1145786 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 20:21:56.562063 1145786 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 20:21:56.792720 1145786 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0819 20:21:57.031738 1145786 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0819 20:21:57.499510 1145786 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0819 20:21:57.839035 1145786 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0819 20:21:58.245542 1145786 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0819 20:21:58.245901 1145786 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-069800 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0819 20:21:58.532491 1145786 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0819 20:21:58.532701 1145786 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-069800 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0819 20:21:59.053707 1145786 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0819 20:22:00.311351 1145786 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0819 20:22:00.570021 1145786 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0819 20:22:00.570318 1145786 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 20:22:01.379043 1145786 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 20:22:01.687887 1145786 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0819 20:22:01.818989 1145786 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 20:22:02.570462 1145786 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 20:22:03.111630 1145786 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 20:22:03.113137 1145786 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 20:22:03.117263 1145786 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 20:22:03.120918 1145786 out.go:235]   - Booting up control plane ...
	I0819 20:22:03.121034 1145786 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 20:22:03.121121 1145786 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 20:22:03.122378 1145786 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 20:22:03.134563 1145786 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 20:22:03.141312 1145786 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 20:22:03.141377 1145786 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 20:22:03.240709 1145786 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0819 20:22:03.240837 1145786 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0819 20:22:04.250187 1145786 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.010985068s
	I0819 20:22:04.250271 1145786 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0819 20:22:10.251562 1145786 kubeadm.go:310] [api-check] The API server is healthy after 6.001342801s
	I0819 20:22:10.271751 1145786 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0819 20:22:10.286925 1145786 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0819 20:22:10.318667 1145786 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0819 20:22:10.318860 1145786 kubeadm.go:310] [mark-control-plane] Marking the node addons-069800 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0819 20:22:10.330893 1145786 kubeadm.go:310] [bootstrap-token] Using token: l6ctr7.1un5006nxdtq0mpv
	I0819 20:22:10.333993 1145786 out.go:235]   - Configuring RBAC rules ...
	I0819 20:22:10.334120 1145786 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0819 20:22:10.341211 1145786 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0819 20:22:10.350023 1145786 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0819 20:22:10.354424 1145786 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0819 20:22:10.358634 1145786 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0819 20:22:10.363160 1145786 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0819 20:22:10.658349 1145786 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0819 20:22:11.094166 1145786 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0819 20:22:11.658878 1145786 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0819 20:22:11.659943 1145786 kubeadm.go:310] 
	I0819 20:22:11.660036 1145786 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0819 20:22:11.660043 1145786 kubeadm.go:310] 
	I0819 20:22:11.660129 1145786 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0819 20:22:11.660134 1145786 kubeadm.go:310] 
	I0819 20:22:11.660162 1145786 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0819 20:22:11.660257 1145786 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0819 20:22:11.660316 1145786 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0819 20:22:11.660330 1145786 kubeadm.go:310] 
	I0819 20:22:11.660382 1145786 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0819 20:22:11.660387 1145786 kubeadm.go:310] 
	I0819 20:22:11.660448 1145786 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0819 20:22:11.660454 1145786 kubeadm.go:310] 
	I0819 20:22:11.660514 1145786 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0819 20:22:11.660603 1145786 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0819 20:22:11.660686 1145786 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0819 20:22:11.660691 1145786 kubeadm.go:310] 
	I0819 20:22:11.660794 1145786 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0819 20:22:11.660877 1145786 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0819 20:22:11.660882 1145786 kubeadm.go:310] 
	I0819 20:22:11.660972 1145786 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token l6ctr7.1un5006nxdtq0mpv \
	I0819 20:22:11.661087 1145786 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:829b70aabb92546346ffb009b5800fc9f292141d7fa24530fc51fd3a8a989ff0 \
	I0819 20:22:11.661113 1145786 kubeadm.go:310] 	--control-plane 
	I0819 20:22:11.661122 1145786 kubeadm.go:310] 
	I0819 20:22:11.661222 1145786 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0819 20:22:11.661236 1145786 kubeadm.go:310] 
	I0819 20:22:11.661333 1145786 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token l6ctr7.1un5006nxdtq0mpv \
	I0819 20:22:11.661451 1145786 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:829b70aabb92546346ffb009b5800fc9f292141d7fa24530fc51fd3a8a989ff0 
	I0819 20:22:11.664305 1145786 kubeadm.go:310] W0819 20:21:56.447241    1028 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 20:22:11.664602 1145786 kubeadm.go:310] W0819 20:21:56.448055    1028 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 20:22:11.664878 1145786 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1067-aws\n", err: exit status 1
	I0819 20:22:11.665006 1145786 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0819 20:22:11.665033 1145786 cni.go:84] Creating CNI manager for ""
	I0819 20:22:11.665046 1145786 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0819 20:22:11.668038 1145786 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0819 20:22:11.670550 1145786 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0819 20:22:11.674422 1145786 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.0/kubectl ...
	I0819 20:22:11.674439 1145786 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0819 20:22:11.695774 1145786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0819 20:22:11.980961 1145786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 20:22:11.981078 1145786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-069800 minikube.k8s.io/updated_at=2024_08_19T20_22_11_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=7253360125032c7e2214e25ff4b5c894ae5844e8 minikube.k8s.io/name=addons-069800 minikube.k8s.io/primary=true
	I0819 20:22:11.981163 1145786 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 20:22:12.225018 1145786 ops.go:34] apiserver oom_adj: -16
	I0819 20:22:12.225116 1145786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 20:22:12.725261 1145786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 20:22:13.225917 1145786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 20:22:13.725329 1145786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 20:22:14.225259 1145786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 20:22:14.725430 1145786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 20:22:15.225476 1145786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 20:22:15.725272 1145786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 20:22:16.225857 1145786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 20:22:16.362048 1145786 kubeadm.go:1113] duration metric: took 4.381146964s to wait for elevateKubeSystemPrivileges
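The repeated "kubectl get sa default" runs above are a readiness poll: minikube retries until the default ServiceAccount exists before granting kube-system privileges and starting addons. A hedged sketch of such a poll loop (the kubectl path is from the log; the 500ms interval and 2-minute deadline are assumptions):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	kubectl := "/var/lib/minikube/binaries/v1.31.0/kubectl"
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		// Succeeds once the default ServiceAccount has been created.
		err := exec.Command("sudo", kubectl, "get", "sa", "default",
			"--kubeconfig=/var/lib/minikube/kubeconfig").Run()
		if err == nil {
			fmt.Println("default service account is present")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for default service account")
}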
	I0819 20:22:16.362074 1145786 kubeadm.go:394] duration metric: took 20.098729214s to StartCluster
	I0819 20:22:16.362090 1145786 settings.go:142] acquiring lock: {Name:mk42a43a496b3883d027e9bc4cab1df0994edc4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 20:22:16.362609 1145786 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19423-1139612/kubeconfig
	I0819 20:22:16.363063 1145786 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-1139612/kubeconfig: {Name:mk04c9370af3a3baaacd607c194f214d66561798 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 20:22:16.363268 1145786 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0819 20:22:16.363358 1145786 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0819 20:22:16.363647 1145786 config.go:182] Loaded profile config "addons-069800": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0819 20:22:16.363675 1145786 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0819 20:22:16.363757 1145786 addons.go:69] Setting yakd=true in profile "addons-069800"
	I0819 20:22:16.363785 1145786 addons.go:234] Setting addon yakd=true in "addons-069800"
	I0819 20:22:16.363809 1145786 host.go:66] Checking if "addons-069800" exists ...
	I0819 20:22:16.364341 1145786 cli_runner.go:164] Run: docker container inspect addons-069800 --format={{.State.Status}}
	I0819 20:22:16.364750 1145786 addons.go:69] Setting cloud-spanner=true in profile "addons-069800"
	I0819 20:22:16.364778 1145786 addons.go:234] Setting addon cloud-spanner=true in "addons-069800"
	I0819 20:22:16.364786 1145786 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-069800"
	I0819 20:22:16.364803 1145786 host.go:66] Checking if "addons-069800" exists ...
	I0819 20:22:16.364809 1145786 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-069800"
	I0819 20:22:16.364847 1145786 host.go:66] Checking if "addons-069800" exists ...
	I0819 20:22:16.365205 1145786 cli_runner.go:164] Run: docker container inspect addons-069800 --format={{.State.Status}}
	I0819 20:22:16.365246 1145786 cli_runner.go:164] Run: docker container inspect addons-069800 --format={{.State.Status}}
	I0819 20:22:16.367831 1145786 addons.go:69] Setting registry=true in profile "addons-069800"
	I0819 20:22:16.368014 1145786 addons.go:234] Setting addon registry=true in "addons-069800"
	I0819 20:22:16.368148 1145786 host.go:66] Checking if "addons-069800" exists ...
	I0819 20:22:16.368982 1145786 cli_runner.go:164] Run: docker container inspect addons-069800 --format={{.State.Status}}
	I0819 20:22:16.368487 1145786 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-069800"
	I0819 20:22:16.371386 1145786 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-069800"
	I0819 20:22:16.371452 1145786 host.go:66] Checking if "addons-069800" exists ...
	I0819 20:22:16.372039 1145786 cli_runner.go:164] Run: docker container inspect addons-069800 --format={{.State.Status}}
	I0819 20:22:16.372108 1145786 addons.go:69] Setting storage-provisioner=true in profile "addons-069800"
	I0819 20:22:16.372143 1145786 addons.go:234] Setting addon storage-provisioner=true in "addons-069800"
	I0819 20:22:16.372195 1145786 host.go:66] Checking if "addons-069800" exists ...
	I0819 20:22:16.374588 1145786 cli_runner.go:164] Run: docker container inspect addons-069800 --format={{.State.Status}}
	I0819 20:22:16.368495 1145786 addons.go:69] Setting default-storageclass=true in profile "addons-069800"
	I0819 20:22:16.388624 1145786 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-069800"
	I0819 20:22:16.388960 1145786 cli_runner.go:164] Run: docker container inspect addons-069800 --format={{.State.Status}}
	I0819 20:22:16.399820 1145786 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-069800"
	I0819 20:22:16.399861 1145786 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-069800"
	I0819 20:22:16.400194 1145786 cli_runner.go:164] Run: docker container inspect addons-069800 --format={{.State.Status}}
	I0819 20:22:16.368507 1145786 addons.go:69] Setting gcp-auth=true in profile "addons-069800"
	I0819 20:22:16.412552 1145786 mustload.go:65] Loading cluster: addons-069800
	I0819 20:22:16.412736 1145786 config.go:182] Loaded profile config "addons-069800": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0819 20:22:16.412990 1145786 cli_runner.go:164] Run: docker container inspect addons-069800 --format={{.State.Status}}
	I0819 20:22:16.418809 1145786 addons.go:69] Setting volcano=true in profile "addons-069800"
	I0819 20:22:16.418862 1145786 addons.go:234] Setting addon volcano=true in "addons-069800"
	I0819 20:22:16.418901 1145786 host.go:66] Checking if "addons-069800" exists ...
	I0819 20:22:16.419364 1145786 cli_runner.go:164] Run: docker container inspect addons-069800 --format={{.State.Status}}
	I0819 20:22:16.368512 1145786 addons.go:69] Setting ingress=true in profile "addons-069800"
	I0819 20:22:16.426853 1145786 addons.go:234] Setting addon ingress=true in "addons-069800"
	I0819 20:22:16.426903 1145786 host.go:66] Checking if "addons-069800" exists ...
	I0819 20:22:16.368517 1145786 addons.go:69] Setting ingress-dns=true in profile "addons-069800"
	I0819 20:22:16.427458 1145786 addons.go:234] Setting addon ingress-dns=true in "addons-069800"
	I0819 20:22:16.427497 1145786 host.go:66] Checking if "addons-069800" exists ...
	I0819 20:22:16.427953 1145786 cli_runner.go:164] Run: docker container inspect addons-069800 --format={{.State.Status}}
	I0819 20:22:16.461848 1145786 addons.go:69] Setting volumesnapshots=true in profile "addons-069800"
	I0819 20:22:16.461888 1145786 addons.go:234] Setting addon volumesnapshots=true in "addons-069800"
	I0819 20:22:16.461925 1145786 host.go:66] Checking if "addons-069800" exists ...
	I0819 20:22:16.462390 1145786 cli_runner.go:164] Run: docker container inspect addons-069800 --format={{.State.Status}}
	I0819 20:22:16.368520 1145786 addons.go:69] Setting inspektor-gadget=true in profile "addons-069800"
	I0819 20:22:16.470268 1145786 addons.go:234] Setting addon inspektor-gadget=true in "addons-069800"
	I0819 20:22:16.470312 1145786 host.go:66] Checking if "addons-069800" exists ...
	I0819 20:22:16.470775 1145786 cli_runner.go:164] Run: docker container inspect addons-069800 --format={{.State.Status}}
	I0819 20:22:16.368523 1145786 addons.go:69] Setting metrics-server=true in profile "addons-069800"
	I0819 20:22:16.478929 1145786 addons.go:234] Setting addon metrics-server=true in "addons-069800"
	I0819 20:22:16.478971 1145786 host.go:66] Checking if "addons-069800" exists ...
	I0819 20:22:16.479433 1145786 cli_runner.go:164] Run: docker container inspect addons-069800 --format={{.State.Status}}
	I0819 20:22:16.492765 1145786 out.go:177] * Verifying Kubernetes components...
	I0819 20:22:16.495771 1145786 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 20:22:16.504106 1145786 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0819 20:22:16.510528 1145786 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0819 20:22:16.510559 1145786 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0819 20:22:16.510635 1145786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069800
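The `docker container inspect -f` template in the line above (and repeated throughout this log) is how the host port mapped to the container's SSH port (22/tcp) is discovered. A minimal Go sketch of how that template evaluates, using text/template over a trimmed stand-in for the inspect output; the struct and the sample port 33928 are illustrative (the port value is taken from the sshutil lines later in this log), not docker's actual data model:

```go
package main

import (
	"os"
	"text/template"
)

// portBinding mirrors the host bindings docker inspect reports under
// NetworkSettings.Ports, trimmed to the fields the template reads.
type portBinding struct {
	HostIP   string
	HostPort string
}

type container struct {
	NetworkSettings struct {
		Ports map[string][]portBinding
	}
}

func main() {
	var c container
	c.NetworkSettings.Ports = map[string][]portBinding{
		"22/tcp": {{HostIP: "127.0.0.1", HostPort: "33928"}},
	}
	// Same template string as the cli_runner.go invocations above:
	// index into the Ports map, take the first binding, read HostPort.
	tmpl := template.Must(template.New("sshPort").Parse(
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`))
	_ = tmpl.Execute(os.Stdout, c) // prints 33928
}
```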
	I0819 20:22:16.565849 1145786 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-069800"
	I0819 20:22:16.565943 1145786 host.go:66] Checking if "addons-069800" exists ...
	I0819 20:22:16.564829 1145786 cli_runner.go:164] Run: docker container inspect addons-069800 --format={{.State.Status}}
	I0819 20:22:16.587351 1145786 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.22
	I0819 20:22:16.601938 1145786 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0819 20:22:16.587589 1145786 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0819 20:22:16.604566 1145786 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0819 20:22:16.604587 1145786 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0819 20:22:16.604653 1145786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069800
	I0819 20:22:16.587831 1145786 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 20:22:16.605312 1145786 cli_runner.go:164] Run: docker container inspect addons-069800 --format={{.State.Status}}
	I0819 20:22:16.635508 1145786 out.go:177]   - Using image docker.io/registry:2.8.3
	I0819 20:22:16.635918 1145786 addons.go:234] Setting addon default-storageclass=true in "addons-069800"
	I0819 20:22:16.635973 1145786 host.go:66] Checking if "addons-069800" exists ...
	I0819 20:22:16.636470 1145786 cli_runner.go:164] Run: docker container inspect addons-069800 --format={{.State.Status}}
	I0819 20:22:16.646713 1145786 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0819 20:22:16.649507 1145786 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0819 20:22:16.649531 1145786 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0819 20:22:16.649601 1145786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069800
	I0819 20:22:16.657883 1145786 host.go:66] Checking if "addons-069800" exists ...
	I0819 20:22:16.659500 1145786 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 20:22:16.659518 1145786 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 20:22:16.659573 1145786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069800
	I0819 20:22:16.673409 1145786 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0819 20:22:16.676163 1145786 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0819 20:22:16.678845 1145786 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0819 20:22:16.679044 1145786 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0819 20:22:16.679059 1145786 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0819 20:22:16.679127 1145786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069800
	I0819 20:22:16.697053 1145786 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0819 20:22:16.704363 1145786 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0819 20:22:16.708369 1145786 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0819 20:22:16.713096 1145786 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0819 20:22:16.718464 1145786 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0819 20:22:16.718499 1145786 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0819 20:22:16.718653 1145786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069800
	I0819 20:22:16.731525 1145786 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0819 20:22:16.732498 1145786 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0819 20:22:16.732711 1145786 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0819 20:22:16.732898 1145786 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0819 20:22:16.733042 1145786 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0819 20:22:16.733081 1145786 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33928 SSHKeyPath:/home/jenkins/minikube-integration/19423-1139612/.minikube/machines/addons-069800/id_rsa Username:docker}
	I0819 20:22:16.741417 1145786 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0819 20:22:16.741437 1145786 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0819 20:22:16.741505 1145786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069800
	I0819 20:22:16.769782 1145786 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0819 20:22:16.772675 1145786 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0819 20:22:16.772703 1145786 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0819 20:22:16.772777 1145786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069800
	I0819 20:22:16.776918 1145786 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0819 20:22:16.780828 1145786 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0819 20:22:16.780854 1145786 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0819 20:22:16.780922 1145786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069800
	I0819 20:22:16.817309 1145786 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 20:22:16.817336 1145786 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 20:22:16.817404 1145786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069800
	I0819 20:22:16.818748 1145786 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0819 20:22:16.818764 1145786 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0819 20:22:16.818826 1145786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069800
	I0819 20:22:16.834139 1145786 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0819 20:22:16.834440 1145786 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33928 SSHKeyPath:/home/jenkins/minikube-integration/19423-1139612/.minikube/machines/addons-069800/id_rsa Username:docker}
	I0819 20:22:16.835771 1145786 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33928 SSHKeyPath:/home/jenkins/minikube-integration/19423-1139612/.minikube/machines/addons-069800/id_rsa Username:docker}
	I0819 20:22:16.842761 1145786 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0819 20:22:16.842842 1145786 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0
	I0819 20:22:16.842929 1145786 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0819 20:22:16.843782 1145786 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33928 SSHKeyPath:/home/jenkins/minikube-integration/19423-1139612/.minikube/machines/addons-069800/id_rsa Username:docker}
	I0819 20:22:16.846701 1145786 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0819 20:22:16.846723 1145786 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0819 20:22:16.846788 1145786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069800
	I0819 20:22:16.847204 1145786 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0819 20:22:16.847218 1145786 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0819 20:22:16.847261 1145786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069800
	I0819 20:22:16.876416 1145786 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33928 SSHKeyPath:/home/jenkins/minikube-integration/19423-1139612/.minikube/machines/addons-069800/id_rsa Username:docker}
	I0819 20:22:16.876978 1145786 out.go:177]   - Using image docker.io/busybox:stable
	I0819 20:22:16.882286 1145786 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0819 20:22:16.882308 1145786 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0819 20:22:16.882439 1145786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069800
	I0819 20:22:16.923748 1145786 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33928 SSHKeyPath:/home/jenkins/minikube-integration/19423-1139612/.minikube/machines/addons-069800/id_rsa Username:docker}
	I0819 20:22:16.924379 1145786 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33928 SSHKeyPath:/home/jenkins/minikube-integration/19423-1139612/.minikube/machines/addons-069800/id_rsa Username:docker}
	I0819 20:22:16.952353 1145786 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33928 SSHKeyPath:/home/jenkins/minikube-integration/19423-1139612/.minikube/machines/addons-069800/id_rsa Username:docker}
	I0819 20:22:16.961617 1145786 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0819 20:22:16.962126 1145786 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33928 SSHKeyPath:/home/jenkins/minikube-integration/19423-1139612/.minikube/machines/addons-069800/id_rsa Username:docker}
	I0819 20:22:16.992972 1145786 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33928 SSHKeyPath:/home/jenkins/minikube-integration/19423-1139612/.minikube/machines/addons-069800/id_rsa Username:docker}
	I0819 20:22:17.004388 1145786 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 20:22:17.009081 1145786 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33928 SSHKeyPath:/home/jenkins/minikube-integration/19423-1139612/.minikube/machines/addons-069800/id_rsa Username:docker}
	I0819 20:22:17.009996 1145786 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33928 SSHKeyPath:/home/jenkins/minikube-integration/19423-1139612/.minikube/machines/addons-069800/id_rsa Username:docker}
	I0819 20:22:17.016406 1145786 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33928 SSHKeyPath:/home/jenkins/minikube-integration/19423-1139612/.minikube/machines/addons-069800/id_rsa Username:docker}
	I0819 20:22:17.019671 1145786 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33928 SSHKeyPath:/home/jenkins/minikube-integration/19423-1139612/.minikube/machines/addons-069800/id_rsa Username:docker}
	W0819 20:22:17.021695 1145786 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0819 20:22:17.021724 1145786 retry.go:31] will retry after 234.196006ms: ssh: handshake failed: EOF
	W0819 20:22:17.257256 1145786 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0819 20:22:17.257337 1145786 retry.go:31] will retry after 445.494623ms: ssh: handshake failed: EOF
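The two sshutil warnings above show a failed SSH dial being retried after a short delay instead of aborting the whole addon install. A minimal sketch of that retry-with-delay pattern, assuming a stand-in dial function; dialSSH, the attempt count, and the delay bounds here are illustrative, not minikube's actual retry.go code:

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// dialSSH stands in for the real SSH dial that failed above.
func dialSSH() error {
	return errors.New("ssh: handshake failed: EOF")
}

// dialWithRetry re-attempts the dial a few times, sleeping a short
// randomized interval between attempts instead of failing immediately.
func dialWithRetry(attempts int) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = dialSSH(); err == nil {
			return nil
		}
		delay := time.Duration(200+rand.Intn(300)) * time.Millisecond
		fmt.Printf("dial failure (will retry): will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	if err := dialWithRetry(3); err != nil {
		fmt.Println("giving up:", err)
	}
}
```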
	I0819 20:22:17.518122 1145786 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0819 20:22:17.518210 1145786 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0819 20:22:17.585206 1145786 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 20:22:17.601422 1145786 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0819 20:22:17.601501 1145786 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0819 20:22:17.613055 1145786 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0819 20:22:17.625439 1145786 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0819 20:22:17.625515 1145786 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0819 20:22:17.633426 1145786 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 20:22:17.685834 1145786 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0819 20:22:17.685904 1145786 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0819 20:22:17.832880 1145786 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0819 20:22:17.843633 1145786 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0819 20:22:17.864568 1145786 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0819 20:22:17.894075 1145786 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0819 20:22:17.905780 1145786 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0819 20:22:17.905856 1145786 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0819 20:22:17.971876 1145786 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0819 20:22:17.971950 1145786 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0819 20:22:18.036935 1145786 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0819 20:22:18.037012 1145786 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0819 20:22:18.049499 1145786 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0819 20:22:18.049574 1145786 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0819 20:22:18.120473 1145786 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0819 20:22:18.120556 1145786 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0819 20:22:18.133142 1145786 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0819 20:22:18.133216 1145786 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0819 20:22:18.338130 1145786 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0819 20:22:18.338203 1145786 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0819 20:22:18.346605 1145786 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0819 20:22:18.346684 1145786 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0819 20:22:18.464526 1145786 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0819 20:22:18.464601 1145786 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0819 20:22:18.498286 1145786 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0819 20:22:18.498361 1145786 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0819 20:22:18.569212 1145786 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0819 20:22:18.570454 1145786 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0819 20:22:18.570528 1145786 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0819 20:22:18.639407 1145786 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0819 20:22:18.639491 1145786 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0819 20:22:18.677996 1145786 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0819 20:22:18.678079 1145786 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0819 20:22:18.769410 1145786 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0819 20:22:18.830421 1145786 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 20:22:18.830494 1145786 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0819 20:22:18.889513 1145786 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0819 20:22:18.889584 1145786 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0819 20:22:18.950587 1145786 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0819 20:22:18.950664 1145786 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0819 20:22:19.017483 1145786 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0819 20:22:19.017561 1145786 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0819 20:22:19.020791 1145786 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0819 20:22:19.020863 1145786 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0819 20:22:19.121543 1145786 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.159887172s)
	I0819 20:22:19.121570 1145786 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
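The command that just completed rewrites the CoreDNS ConfigMap so that host.minikube.internal resolves to the host gateway (192.168.49.1) inside the cluster. A minimal Go sketch of the Corefile transformation the sed pipeline performs, inserting a hosts stanza just before the existing forward plugin (the same pipeline also inserts a log directive before errors, omitted here); this is an illustrative re-implementation, not minikube's code:

```go
package main

import (
	"fmt"
	"strings"
)

// injectHostRecord inserts a hosts{} stanza before the forward plugin so
// host.minikube.internal resolves to the given IP inside the cluster.
func injectHostRecord(corefile, ip string) string {
	hosts := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", ip)
	var out strings.Builder
	for _, line := range strings.SplitAfter(corefile, "\n") {
		if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
			out.WriteString(hosts)
		}
		out.WriteString(line)
	}
	return out.String()
}

func main() {
	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf {\n           max_concurrent 1000\n        }\n}\n"
	fmt.Print(injectHostRecord(corefile, "192.168.49.1"))
}
```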
	I0819 20:22:19.122649 1145786 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.118227249s)
	I0819 20:22:19.123479 1145786 node_ready.go:35] waiting up to 6m0s for node "addons-069800" to be "Ready" ...
	I0819 20:22:19.131841 1145786 node_ready.go:49] node "addons-069800" has status "Ready":"True"
	I0819 20:22:19.131870 1145786 node_ready.go:38] duration metric: took 8.375168ms for node "addons-069800" to be "Ready" ...
	I0819 20:22:19.131882 1145786 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 20:22:19.158758 1145786 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-dqfzk" in "kube-system" namespace to be "Ready" ...
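The pod_ready.go lines that follow repeatedly report whether a pod has status "Ready":"True" or "False". A minimal sketch of the underlying check using the real k8s.io/api types: a pod counts as Ready when its PodReady condition is True. The helper name isPodReady is an illustrative assumption, not minikube's code:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// isPodReady reports whether the pod's PodReady condition is True,
// which is the condition the "Ready" waits above are checking.
func isPodReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	pod := &corev1.Pod{}
	pod.Status.Conditions = []corev1.PodCondition{
		{Type: corev1.PodReady, Status: corev1.ConditionTrue},
	}
	fmt.Println(isPodReady(pod)) // true
}
```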
	I0819 20:22:19.294863 1145786 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 20:22:19.340743 1145786 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0819 20:22:19.509514 1145786 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0819 20:22:19.509538 1145786 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0819 20:22:19.512076 1145786 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0819 20:22:19.512099 1145786 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0819 20:22:19.521996 1145786 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.936704875s)
	I0819 20:22:19.560106 1145786 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0819 20:22:19.560176 1145786 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0819 20:22:19.626303 1145786 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-069800" context rescaled to 1 replicas
	I0819 20:22:19.728945 1145786 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0819 20:22:19.735154 1145786 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0819 20:22:19.735227 1145786 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0819 20:22:19.806863 1145786 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0819 20:22:19.806937 1145786 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0819 20:22:19.970017 1145786 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0819 20:22:19.970091 1145786 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0819 20:22:20.053532 1145786 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.440363625s)
	I0819 20:22:20.058734 1145786 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0819 20:22:20.318670 1145786 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0819 20:22:20.318743 1145786 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0819 20:22:20.411228 1145786 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0819 20:22:20.411299 1145786 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0819 20:22:20.554834 1145786 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0819 20:22:20.554915 1145786 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0819 20:22:20.801365 1145786 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0819 20:22:21.155307 1145786 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.521805406s)
	I0819 20:22:21.155430 1145786 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.322490524s)
	I0819 20:22:21.155507 1145786 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.311803694s)
	I0819 20:22:21.187832 1145786 pod_ready.go:103] pod "coredns-6f6b679f8f-dqfzk" in "kube-system" namespace has status "Ready":"False"
	I0819 20:22:23.690526 1145786 pod_ready.go:103] pod "coredns-6f6b679f8f-dqfzk" in "kube-system" namespace has status "Ready":"False"
	I0819 20:22:23.893511 1145786 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0819 20:22:23.893594 1145786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069800
	I0819 20:22:23.922319 1145786 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33928 SSHKeyPath:/home/jenkins/minikube-integration/19423-1139612/.minikube/machines/addons-069800/id_rsa Username:docker}
	I0819 20:22:24.505692 1145786 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0819 20:22:24.701439 1145786 addons.go:234] Setting addon gcp-auth=true in "addons-069800"
	I0819 20:22:24.701537 1145786 host.go:66] Checking if "addons-069800" exists ...
	I0819 20:22:24.702085 1145786 cli_runner.go:164] Run: docker container inspect addons-069800 --format={{.State.Status}}
	I0819 20:22:24.731964 1145786 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0819 20:22:24.732019 1145786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069800
	I0819 20:22:24.760657 1145786 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33928 SSHKeyPath:/home/jenkins/minikube-integration/19423-1139612/.minikube/machines/addons-069800/id_rsa Username:docker}
	I0819 20:22:25.635074 1145786 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.770421303s)
	I0819 20:22:25.635172 1145786 addons.go:475] Verifying addon ingress=true in "addons-069800"
	I0819 20:22:25.638283 1145786 out.go:177] * Verifying ingress addon...
	I0819 20:22:25.642645 1145786 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0819 20:22:25.646082 1145786 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0819 20:22:25.646106 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
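The kapi.go lines above list pods by label selector (app.kubernetes.io/name=ingress-nginx) and then poll their phase until they leave Pending. A minimal client-go sketch of that list-by-selector step, assuming the default kubeconfig location; this is an illustrative sketch, not minikube's kapi implementation:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load ~/.kube/config; the running test uses the addons-069800 context.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("ingress-nginx").List(context.TODO(), metav1.ListOptions{
		LabelSelector: "app.kubernetes.io/name=ingress-nginx",
	})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		// A pod still reported here as Pending matches the
		// "current state: Pending" lines in the log.
		fmt.Println(p.Name, p.Status.Phase)
	}
}
```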
	I0819 20:22:26.148851 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:26.166345 1145786 pod_ready.go:103] pod "coredns-6f6b679f8f-dqfzk" in "kube-system" namespace has status "Ready":"False"
	I0819 20:22:26.700035 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:27.158778 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:27.632097 1145786 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (9.737925139s)
	I0819 20:22:27.632171 1145786 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (9.062884856s)
	I0819 20:22:27.632195 1145786 addons.go:475] Verifying addon registry=true in "addons-069800"
	I0819 20:22:27.632392 1145786 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.862906112s)
	I0819 20:22:27.632671 1145786 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.337733726s)
	I0819 20:22:27.632700 1145786 addons.go:475] Verifying addon metrics-server=true in "addons-069800"
	I0819 20:22:27.632752 1145786 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (8.29193322s)
	I0819 20:22:27.632921 1145786 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.903901522s)
	W0819 20:22:27.632948 1145786 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0819 20:22:27.632963 1145786 retry.go:31] will retry after 275.74173ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
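The failure above is an ordering problem: the VolumeSnapshotClass object is applied in the same batch as the CRDs that define it, so the first apply reports "no matches for kind" and the batch is re-applied after a short delay (and, a few lines below, with --force). A minimal sketch of that apply-and-retry loop, shelling out to kubectl directly rather than over SSH as the log does; the helper name, attempt count, and delay are illustrative:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// applyWithRetry re-runs `kubectl apply` when the only problem is that a
// CRD referenced by one of the manifests has not been registered yet.
func applyWithRetry(files []string, attempts int) error {
	args := []string{"apply"}
	for _, f := range files {
		args = append(args, "-f", f)
	}
	var out []byte
	var err error
	for i := 0; i < attempts; i++ {
		out, err = exec.Command("kubectl", args...).CombinedOutput()
		if err == nil {
			return nil
		}
		if !strings.Contains(string(out), "no matches for kind") {
			return fmt.Errorf("apply failed: %v\n%s", err, out)
		}
		// CRD not established yet; give the API server a moment and retry.
		time.Sleep(300 * time.Millisecond)
	}
	return fmt.Errorf("apply failed after %d attempts: %v\n%s", attempts, err, out)
}

func main() {
	files := []string{
		"/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml",
		"/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml",
	}
	if err := applyWithRetry(files, 5); err != nil {
		fmt.Println(err)
	}
}
```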
	I0819 20:22:27.633052 1145786 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (7.574239223s)
	I0819 20:22:27.635462 1145786 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-069800 service yakd-dashboard -n yakd-dashboard
	
	I0819 20:22:27.635632 1145786 out.go:177] * Verifying registry addon...
	I0819 20:22:27.641005 1145786 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0819 20:22:27.689812 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:27.691893 1145786 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0819 20:22:27.691972 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:27.909389 1145786 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0819 20:22:28.160866 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:28.162174 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:28.177687 1145786 pod_ready.go:103] pod "coredns-6f6b679f8f-dqfzk" in "kube-system" namespace has status "Ready":"False"
	I0819 20:22:28.242091 1145786 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.440626624s)
	I0819 20:22:28.242139 1145786 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-069800"
	I0819 20:22:28.242323 1145786 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.510335405s)
	I0819 20:22:28.245498 1145786 out.go:177] * Verifying csi-hostpath-driver addon...
	I0819 20:22:28.245590 1145786 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0819 20:22:28.251204 1145786 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0819 20:22:28.254855 1145786 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0819 20:22:28.262441 1145786 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0819 20:22:28.262524 1145786 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0819 20:22:28.278702 1145786 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0819 20:22:28.278782 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:28.402519 1145786 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0819 20:22:28.402606 1145786 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0819 20:22:28.485514 1145786 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0819 20:22:28.485591 1145786 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0819 20:22:28.542176 1145786 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0819 20:22:28.645687 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:28.647877 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:28.756407 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:29.145966 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:29.148465 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:29.256041 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:29.527536 1145786 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.618098506s)
	I0819 20:22:29.653023 1145786 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.110768112s)
	I0819 20:22:29.655094 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:29.656540 1145786 addons.go:475] Verifying addon gcp-auth=true in "addons-069800"
	I0819 20:22:29.657896 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:29.659796 1145786 out.go:177] * Verifying gcp-auth addon...
	I0819 20:22:29.664555 1145786 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0819 20:22:29.748571 1145786 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0819 20:22:29.756301 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:30.148768 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:30.149825 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:30.255894 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:30.647245 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:30.648687 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:30.665267 1145786 pod_ready.go:103] pod "coredns-6f6b679f8f-dqfzk" in "kube-system" namespace has status "Ready":"False"
	I0819 20:22:30.761400 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:31.153901 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:31.155436 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:31.265831 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:31.649712 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:31.651915 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:31.668105 1145786 pod_ready.go:93] pod "coredns-6f6b679f8f-dqfzk" in "kube-system" namespace has status "Ready":"True"
	I0819 20:22:31.668187 1145786 pod_ready.go:82] duration metric: took 12.509387945s for pod "coredns-6f6b679f8f-dqfzk" in "kube-system" namespace to be "Ready" ...
	I0819 20:22:31.668213 1145786 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-qjpk8" in "kube-system" namespace to be "Ready" ...
	I0819 20:22:31.671246 1145786 pod_ready.go:98] error getting pod "coredns-6f6b679f8f-qjpk8" in "kube-system" namespace (skipping!): pods "coredns-6f6b679f8f-qjpk8" not found
	I0819 20:22:31.671314 1145786 pod_ready.go:82] duration metric: took 3.056293ms for pod "coredns-6f6b679f8f-qjpk8" in "kube-system" namespace to be "Ready" ...
	E0819 20:22:31.671340 1145786 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-6f6b679f8f-qjpk8" in "kube-system" namespace (skipping!): pods "coredns-6f6b679f8f-qjpk8" not found
	I0819 20:22:31.671360 1145786 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-069800" in "kube-system" namespace to be "Ready" ...
	I0819 20:22:31.680215 1145786 pod_ready.go:93] pod "etcd-addons-069800" in "kube-system" namespace has status "Ready":"True"
	I0819 20:22:31.680322 1145786 pod_ready.go:82] duration metric: took 8.923256ms for pod "etcd-addons-069800" in "kube-system" namespace to be "Ready" ...
	I0819 20:22:31.680354 1145786 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-069800" in "kube-system" namespace to be "Ready" ...
	I0819 20:22:31.688852 1145786 pod_ready.go:93] pod "kube-apiserver-addons-069800" in "kube-system" namespace has status "Ready":"True"
	I0819 20:22:31.688935 1145786 pod_ready.go:82] duration metric: took 8.559721ms for pod "kube-apiserver-addons-069800" in "kube-system" namespace to be "Ready" ...
	I0819 20:22:31.688963 1145786 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-069800" in "kube-system" namespace to be "Ready" ...
	I0819 20:22:31.698301 1145786 pod_ready.go:93] pod "kube-controller-manager-addons-069800" in "kube-system" namespace has status "Ready":"True"
	I0819 20:22:31.698384 1145786 pod_ready.go:82] duration metric: took 9.397148ms for pod "kube-controller-manager-addons-069800" in "kube-system" namespace to be "Ready" ...
	I0819 20:22:31.698471 1145786 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-8tlfz" in "kube-system" namespace to be "Ready" ...
	I0819 20:22:31.760517 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:31.867959 1145786 pod_ready.go:93] pod "kube-proxy-8tlfz" in "kube-system" namespace has status "Ready":"True"
	I0819 20:22:31.868026 1145786 pod_ready.go:82] duration metric: took 169.527052ms for pod "kube-proxy-8tlfz" in "kube-system" namespace to be "Ready" ...
	I0819 20:22:31.868052 1145786 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-069800" in "kube-system" namespace to be "Ready" ...
	I0819 20:22:32.157784 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:32.159030 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:32.260309 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:32.263266 1145786 pod_ready.go:93] pod "kube-scheduler-addons-069800" in "kube-system" namespace has status "Ready":"True"
	I0819 20:22:32.263350 1145786 pod_ready.go:82] duration metric: took 395.275971ms for pod "kube-scheduler-addons-069800" in "kube-system" namespace to be "Ready" ...
	I0819 20:22:32.263382 1145786 pod_ready.go:39] duration metric: took 13.131482633s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 20:22:32.263436 1145786 api_server.go:52] waiting for apiserver process to appear ...
	I0819 20:22:32.263552 1145786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:22:32.331477 1145786 api_server.go:72] duration metric: took 15.968181055s to wait for apiserver process to appear ...
	I0819 20:22:32.331553 1145786 api_server.go:88] waiting for apiserver healthz status ...
	I0819 20:22:32.331607 1145786 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 20:22:32.342608 1145786 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0819 20:22:32.343813 1145786 api_server.go:141] control plane version: v1.31.0
	I0819 20:22:32.343919 1145786 api_server.go:131] duration metric: took 12.333108ms to wait for apiserver health ...
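The healthz wait above polls the API server's /healthz endpoint until it answers 200 "ok". A minimal sketch of such a poll against the same URL, skipping TLS verification for brevity where the real check trusts the cluster CA; the function name and timeouts are illustrative:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver healthz endpoint until it answers
// 200 or the deadline passes.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		// The real check trusts the cluster CA; skipping verification here
		// keeps the sketch self-contained.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   2 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("%s returned 200: %s\n", url, body)
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy within %v", timeout)
}

func main() {
	_ = waitForHealthz("https://192.168.49.2:8443/healthz", time.Minute)
}
```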
	I0819 20:22:32.343948 1145786 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 20:22:32.470794 1145786 system_pods.go:59] 18 kube-system pods found
	I0819 20:22:32.470890 1145786 system_pods.go:61] "coredns-6f6b679f8f-dqfzk" [30ee1dce-8fa4-4087-b5bf-a5d028117dc8] Running
	I0819 20:22:32.470914 1145786 system_pods.go:61] "csi-hostpath-attacher-0" [4e331384-7ca0-46af-8f8f-48fa51710733] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0819 20:22:32.470956 1145786 system_pods.go:61] "csi-hostpath-resizer-0" [d897c626-8a3f-4e8f-ad29-1c7ebe68fbe3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0819 20:22:32.470985 1145786 system_pods.go:61] "csi-hostpathplugin-wk2vn" [09397d9a-6f96-4039-9037-634d97ea4ec8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0819 20:22:32.471007 1145786 system_pods.go:61] "etcd-addons-069800" [4e17daa0-832b-482b-b57c-91dc33e6801a] Running
	I0819 20:22:32.471032 1145786 system_pods.go:61] "kindnet-777c5" [8c1b212c-1c59-42bd-b916-c52be0e29cc2] Running
	I0819 20:22:32.471064 1145786 system_pods.go:61] "kube-apiserver-addons-069800" [ab93cc21-de28-43c6-bc78-544bffa42d93] Running
	I0819 20:22:32.471090 1145786 system_pods.go:61] "kube-controller-manager-addons-069800" [37d8409d-cfdf-4a91-a58b-bc0be6ed9ef7] Running
	I0819 20:22:32.471111 1145786 system_pods.go:61] "kube-ingress-dns-minikube" [df60b3bf-d686-45a6-8994-a9790c8e970a] Running
	I0819 20:22:32.471132 1145786 system_pods.go:61] "kube-proxy-8tlfz" [c423b130-9be2-4a2d-b578-796feff49306] Running
	I0819 20:22:32.471153 1145786 system_pods.go:61] "kube-scheduler-addons-069800" [e5b1653b-b421-49ac-9cf3-dc131f52f8ac] Running
	I0819 20:22:32.471188 1145786 system_pods.go:61] "metrics-server-8988944d9-ldtcv" [e82f97ea-f8d4-47b6-a217-6e5677777532] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 20:22:32.471211 1145786 system_pods.go:61] "nvidia-device-plugin-daemonset-qrxrs" [7b5516bf-34eb-427f-9819-d72c940d96e6] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0819 20:22:32.471235 1145786 system_pods.go:61] "registry-6fb4cdfc84-gr6wk" [82b3deb3-5365-4530-b213-a092c0d9a803] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0819 20:22:32.471269 1145786 system_pods.go:61] "registry-proxy-hq5bw" [dfa32c3a-0d40-440e-885e-5723158e7561] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0819 20:22:32.471298 1145786 system_pods.go:61] "snapshot-controller-56fcc65765-5czwh" [e39e43d6-8664-48db-b664-21c5a3c2aa57] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0819 20:22:32.471323 1145786 system_pods.go:61] "snapshot-controller-56fcc65765-q4x4d" [09a896d2-d27f-45cf-ac71-44b630390ac5] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0819 20:22:32.471344 1145786 system_pods.go:61] "storage-provisioner" [61cf2a6f-eebc-4264-a283-a245a23da4af] Running
	I0819 20:22:32.471378 1145786 system_pods.go:74] duration metric: took 127.379033ms to wait for pod list to return data ...
	I0819 20:22:32.471413 1145786 default_sa.go:34] waiting for default service account to be created ...
	I0819 20:22:32.650946 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:32.652163 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:32.662319 1145786 default_sa.go:45] found service account: "default"
	I0819 20:22:32.662393 1145786 default_sa.go:55] duration metric: took 190.959043ms for default service account to be created ...
	I0819 20:22:32.662419 1145786 system_pods.go:116] waiting for k8s-apps to be running ...
	I0819 20:22:32.756530 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:32.869358 1145786 system_pods.go:86] 18 kube-system pods found
	I0819 20:22:32.869437 1145786 system_pods.go:89] "coredns-6f6b679f8f-dqfzk" [30ee1dce-8fa4-4087-b5bf-a5d028117dc8] Running
	I0819 20:22:32.869462 1145786 system_pods.go:89] "csi-hostpath-attacher-0" [4e331384-7ca0-46af-8f8f-48fa51710733] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0819 20:22:32.869488 1145786 system_pods.go:89] "csi-hostpath-resizer-0" [d897c626-8a3f-4e8f-ad29-1c7ebe68fbe3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0819 20:22:32.869530 1145786 system_pods.go:89] "csi-hostpathplugin-wk2vn" [09397d9a-6f96-4039-9037-634d97ea4ec8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0819 20:22:32.869549 1145786 system_pods.go:89] "etcd-addons-069800" [4e17daa0-832b-482b-b57c-91dc33e6801a] Running
	I0819 20:22:32.869572 1145786 system_pods.go:89] "kindnet-777c5" [8c1b212c-1c59-42bd-b916-c52be0e29cc2] Running
	I0819 20:22:32.869606 1145786 system_pods.go:89] "kube-apiserver-addons-069800" [ab93cc21-de28-43c6-bc78-544bffa42d93] Running
	I0819 20:22:32.869632 1145786 system_pods.go:89] "kube-controller-manager-addons-069800" [37d8409d-cfdf-4a91-a58b-bc0be6ed9ef7] Running
	I0819 20:22:32.869652 1145786 system_pods.go:89] "kube-ingress-dns-minikube" [df60b3bf-d686-45a6-8994-a9790c8e970a] Running
	I0819 20:22:32.869675 1145786 system_pods.go:89] "kube-proxy-8tlfz" [c423b130-9be2-4a2d-b578-796feff49306] Running
	I0819 20:22:32.869709 1145786 system_pods.go:89] "kube-scheduler-addons-069800" [e5b1653b-b421-49ac-9cf3-dc131f52f8ac] Running
	I0819 20:22:32.869732 1145786 system_pods.go:89] "metrics-server-8988944d9-ldtcv" [e82f97ea-f8d4-47b6-a217-6e5677777532] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 20:22:32.869753 1145786 system_pods.go:89] "nvidia-device-plugin-daemonset-qrxrs" [7b5516bf-34eb-427f-9819-d72c940d96e6] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0819 20:22:32.869777 1145786 system_pods.go:89] "registry-6fb4cdfc84-gr6wk" [82b3deb3-5365-4530-b213-a092c0d9a803] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0819 20:22:32.869815 1145786 system_pods.go:89] "registry-proxy-hq5bw" [dfa32c3a-0d40-440e-885e-5723158e7561] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0819 20:22:32.869844 1145786 system_pods.go:89] "snapshot-controller-56fcc65765-5czwh" [e39e43d6-8664-48db-b664-21c5a3c2aa57] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0819 20:22:32.869868 1145786 system_pods.go:89] "snapshot-controller-56fcc65765-q4x4d" [09a896d2-d27f-45cf-ac71-44b630390ac5] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0819 20:22:32.869890 1145786 system_pods.go:89] "storage-provisioner" [61cf2a6f-eebc-4264-a283-a245a23da4af] Running
	I0819 20:22:32.869927 1145786 system_pods.go:126] duration metric: took 207.488837ms to wait for k8s-apps to be running ...
	I0819 20:22:32.869953 1145786 system_svc.go:44] waiting for kubelet service to be running ....
	I0819 20:22:32.870041 1145786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 20:22:32.883403 1145786 system_svc.go:56] duration metric: took 13.434028ms WaitForService to wait for kubelet
	I0819 20:22:32.883476 1145786 kubeadm.go:582] duration metric: took 16.52018576s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 20:22:32.883510 1145786 node_conditions.go:102] verifying NodePressure condition ...
	I0819 20:22:33.063437 1145786 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0819 20:22:33.063519 1145786 node_conditions.go:123] node cpu capacity is 2
	I0819 20:22:33.063550 1145786 node_conditions.go:105] duration metric: took 180.017547ms to run NodePressure ...
	I0819 20:22:33.063590 1145786 start.go:241] waiting for startup goroutines ...
	I0819 20:22:33.146750 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:33.148317 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:33.256896 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:33.647449 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:33.648728 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:33.756587 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:34.150304 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:34.153161 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:34.274748 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:34.646667 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:34.647569 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:34.756317 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:35.153467 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:35.162412 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:35.257629 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:35.645972 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:35.648131 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:35.756394 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:36.146778 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:36.147968 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:36.256363 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:36.647111 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:36.648657 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:36.757064 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:37.146196 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:37.148790 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:37.256921 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:37.645987 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:37.648414 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:37.757295 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:38.144665 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:38.147652 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:38.255807 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:38.647860 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:38.649374 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:38.757504 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:39.152968 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:39.154856 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:39.256470 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:39.645895 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:39.648593 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:39.756610 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:40.148337 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:40.149902 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:40.256949 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:40.647513 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:40.647952 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:40.756046 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:41.144679 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:41.147151 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:41.256110 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:41.646205 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:41.648278 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:41.757643 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:42.148875 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:42.150917 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:42.257186 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:42.646692 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:42.648839 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:42.757210 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:43.147326 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:43.148768 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:43.265026 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:43.649178 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:43.651805 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:43.757066 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:44.145359 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:44.149226 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:44.256790 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:44.645967 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:44.648285 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:44.756788 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:45.149715 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:45.151281 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:45.257588 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:45.656928 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:45.657483 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:45.766415 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:46.145325 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:46.147836 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:46.257410 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:46.646113 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:46.647924 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:46.755922 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:47.148195 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:47.149134 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:47.256778 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:47.646695 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:47.647690 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:47.756188 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:48.146103 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:48.148344 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:48.256382 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:48.647228 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:48.648532 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:48.756814 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:49.145619 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:49.148009 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:49.256724 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:49.648698 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:49.650657 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:49.756511 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:50.146485 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:50.147919 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:50.256514 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:50.648468 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:50.649649 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:50.756061 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:51.165880 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:51.167367 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:51.257122 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:51.647030 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:51.649542 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:51.761680 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:52.146831 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:52.156488 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:52.261395 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:52.656041 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:52.657845 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:52.758539 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:53.149991 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:53.151736 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:53.257001 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:53.649125 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:53.649838 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:53.766624 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:54.149254 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:54.150420 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:54.255880 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:54.647392 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:54.649204 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:54.756125 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:55.146716 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:55.147721 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:55.256388 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:55.645735 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:55.649156 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:55.757813 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:56.146400 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:56.148102 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:56.256203 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:56.645100 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:56.647586 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:56.755815 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:57.144803 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:57.147367 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:57.256197 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:57.645513 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:57.648540 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:57.756335 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:58.148559 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:58.151663 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:58.260099 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:58.648515 1145786 kapi.go:107] duration metric: took 31.007517206s to wait for kubernetes.io/minikube-addons=registry ...
	I0819 20:22:58.650081 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:58.756959 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:59.148338 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:59.256482 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:59.647728 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:59.756901 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:00.149901 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:00.268714 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:00.649258 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:00.757682 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:01.147293 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:01.256581 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:01.647692 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:01.758039 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:02.148310 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:02.261567 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:02.647849 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:02.755650 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:03.147239 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:03.260490 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:03.658216 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:03.758054 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:04.147743 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:04.256386 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:04.648284 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:04.756211 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:05.148752 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:05.256397 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:05.648081 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:05.755948 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:06.148579 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:06.256154 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:06.647870 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:06.756933 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:07.146680 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:07.256055 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:07.648090 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:07.756132 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:08.147573 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:08.256875 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:08.650072 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:08.756422 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:09.148285 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:09.256076 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:09.647449 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:09.757066 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:10.147307 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:10.256422 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:10.649245 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:10.755685 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:11.147663 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:11.256292 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:11.649620 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:11.756538 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:12.149231 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:12.258094 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:12.656320 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:12.755957 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:13.147170 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:13.255738 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:13.647451 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:13.755908 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:14.147756 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:14.255796 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:14.647318 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:14.756616 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:15.146856 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:15.256439 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:15.647216 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:15.756746 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:16.151464 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:16.256522 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:16.647277 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:16.757132 1145786 kapi.go:107] duration metric: took 48.505930389s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0819 20:23:17.147175 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:17.646498 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:18.148132 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:18.647165 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:19.147447 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:19.646966 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:20.147343 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:20.646797 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:21.147326 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:21.647131 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:22.148106 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:22.647615 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:23.147952 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:23.646792 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:24.147464 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:24.647325 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:25.148413 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:25.647417 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:26.147776 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:26.647006 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:27.147355 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:27.647230 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:28.146758 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:28.647158 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:29.148020 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:29.647585 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:30.147999 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:30.647317 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:31.151801 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:31.646756 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:32.147641 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:32.654607 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:33.147637 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:33.723758 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:34.147921 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:34.646863 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:35.147366 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:35.647276 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:36.147036 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:36.650734 1145786 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:37.148237 1145786 kapi.go:107] duration metric: took 1m11.505594041s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0819 20:23:52.676062 1145786 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0819 20:23:52.676085 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:53.167998 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:53.667981 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:54.168485 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:54.669960 1145786 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:55.168701 1145786 kapi.go:107] duration metric: took 1m25.504141867s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0819 20:23:55.170093 1145786 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-069800 cluster.
	I0819 20:23:55.171783 1145786 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0819 20:23:55.173137 1145786 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0819 20:23:55.174450 1145786 out.go:177] * Enabled addons: default-storageclass, nvidia-device-plugin, storage-provisioner, ingress-dns, cloud-spanner, volcano, metrics-server, inspektor-gadget, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0819 20:23:55.175819 1145786 addons.go:510] duration metric: took 1m38.812136553s for enable addons: enabled=[default-storageclass nvidia-device-plugin storage-provisioner ingress-dns cloud-spanner volcano metrics-server inspektor-gadget yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0819 20:23:55.175903 1145786 start.go:246] waiting for cluster config update ...
	I0819 20:23:55.175933 1145786 start.go:255] writing updated cluster config ...
	I0819 20:23:55.176313 1145786 ssh_runner.go:195] Run: rm -f paused
	I0819 20:23:55.517860 1145786 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0819 20:23:55.519757 1145786 out.go:177] * Done! kubectl is now configured to use "addons-069800" cluster and "default" namespace by default
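
The repeated kapi.go:96 "waiting for pod ... Pending" entries above are minikube's internal poller cycling on addon label selectors until the matching pods report Ready. Outside the test harness, a comparable wait can be run with kubectl against the same context; the command below is only an illustrative sketch (it assumes the registry addon's pods sit in kube-system, as the pod list above shows), not the call the harness itself makes:

    kubectl --context addons-069800 -n kube-system wait pod \
        -l kubernetes.io/minikube-addons=registry --for=condition=Ready --timeout=6m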
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD
	7d9ec3321a78c       e2d3313f65753       About a minute ago   Exited              gadget                                   5                   b51de748c8a15       gadget-42v65
	d9d04575afd3c       6ef582f3ec844       3 minutes ago        Running             gcp-auth                                 0                   c589b1de9d6c8       gcp-auth-89d5ffd79-p2b9n
	c49c336289338       289a818c8d9c5       3 minutes ago        Running             controller                               0                   557412243ebe8       ingress-nginx-controller-bc57996ff-968ff
	9f91b57ed7a91       ee6d597e62dc8       3 minutes ago        Running             csi-snapshotter                          0                   9a04205d9e217       csi-hostpathplugin-wk2vn
	b4d338c0262d2       642ded511e141       4 minutes ago        Running             csi-provisioner                          0                   9a04205d9e217       csi-hostpathplugin-wk2vn
	bda6d22bdb81a       922312104da8a       4 minutes ago        Running             liveness-probe                           0                   9a04205d9e217       csi-hostpathplugin-wk2vn
	9bcf817febcd4       08f6b2990811a       4 minutes ago        Running             hostpath                                 0                   9a04205d9e217       csi-hostpathplugin-wk2vn
	a2061a72567a5       0107d56dbc0be       4 minutes ago        Running             node-driver-registrar                    0                   9a04205d9e217       csi-hostpathplugin-wk2vn
	837d16af727cc       8b46b1cd48760       4 minutes ago        Running             admission                                0                   61c38eacca5d6       volcano-admission-77d7d48b68-ffrm8
	9446126757c37       487fa743e1e22       4 minutes ago        Running             csi-resizer                              0                   572c859f10471       csi-hostpath-resizer-0
	ea827547c63df       d9c7ad4c226bf       4 minutes ago        Running             volcano-scheduler                        0                   8842e17ff4dd2       volcano-scheduler-576bc46687-f6rfz
	95714caa79eec       9a80d518f102c       4 minutes ago        Running             csi-attacher                             0                   58a842476aef8       csi-hostpath-attacher-0
	25778701f758c       1505f556b3a7b       4 minutes ago        Running             volcano-controllers                      0                   87bac298f0697       volcano-controllers-56675bb4d5-kb97v
	9fabda3bc93dd       1461903ec4fe9       4 minutes ago        Running             csi-external-health-monitor-controller   0                   9a04205d9e217       csi-hostpathplugin-wk2vn
	e6116ec42ed1e       420193b27261a       4 minutes ago        Exited              patch                                    1                   f67c255e983cb       ingress-nginx-admission-patch-b6mnf
	48d906f35f7a0       420193b27261a       4 minutes ago        Exited              create                                   0                   67c0a308e9ef1       ingress-nginx-admission-create-5qkx2
	809a760ade030       3410e1561990a       4 minutes ago        Running             registry-proxy                           0                   35e4634dbc9fe       registry-proxy-hq5bw
	899f131862eae       6fed88f43b276       4 minutes ago        Running             registry                                 0                   5dd43b5bfb221       registry-6fb4cdfc84-gr6wk
	eace09f071ecd       95dccb4df54ab       4 minutes ago        Running             metrics-server                           0                   56e5353fe08bc       metrics-server-8988944d9-ldtcv
	9d645052cb732       4d1e5c3e97420       4 minutes ago        Running             volume-snapshot-controller               0                   f589a4a7864b5       snapshot-controller-56fcc65765-q4x4d
	4227fca29a365       4d1e5c3e97420       4 minutes ago        Running             volume-snapshot-controller               0                   1e29a2c3fa840       snapshot-controller-56fcc65765-5czwh
	455f953f56a16       7ce2150c8929b       4 minutes ago        Running             local-path-provisioner                   0                   be023e2771859       local-path-provisioner-86d989889c-45q8m
	7b4f31b8f8f2f       77bdba588b953       4 minutes ago        Running             yakd                                     0                   57d8fd56953fe       yakd-dashboard-67d98fc6b-5lkpw
	c179c22a34599       a9bac31a5be8d       4 minutes ago        Running             nvidia-device-plugin-ctr                 0                   499603cd3476d       nvidia-device-plugin-daemonset-qrxrs
	3d29bbed223c9       53af6e2c4c343       4 minutes ago        Running             cloud-spanner-emulator                   0                   c2d3e85bf6e74       cloud-spanner-emulator-c4bc9b5f8-wv6n4
	c6341bf52625f       35508c2f890c4       4 minutes ago        Running             minikube-ingress-dns                     0                   2f6f7c317069a       kube-ingress-dns-minikube
	5c926fc1840f9       2437cf7621777       4 minutes ago        Running             coredns                                  0                   668bfd202a27b       coredns-6f6b679f8f-dqfzk
	dcb30357b82af       ba04bb24b9575       4 minutes ago        Running             storage-provisioner                      0                   f9e2ad3ee1141       storage-provisioner
	449c906227d59       6a23fa8fd2b78       4 minutes ago        Running             kindnet-cni                              0                   e3fa922b1cd8c       kindnet-777c5
	00b8deab39e72       71d55d66fd4ee       4 minutes ago        Running             kube-proxy                               0                   6e4200256ff87       kube-proxy-8tlfz
	433a2ab189de0       cd0f0ae0ec9e0       5 minutes ago        Running             kube-apiserver                           0                   3e48603d7d2ce       kube-apiserver-addons-069800
	97172d95cae12       fbbbd428abb4d       5 minutes ago        Running             kube-scheduler                           0                   9c518c861cf8a       kube-scheduler-addons-069800
	69292e83ebf4d       27e3830e14027       5 minutes ago        Running             etcd                                     0                   a82c47673f95a       etcd-addons-069800
	bc62adb61e2f9       fcb0683e6bdbd       5 minutes ago        Running             kube-controller-manager                  0                   2197cf5fc0216       kube-controller-manager-addons-069800
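
The table above matches the output format of crictl ps -a, and the container status, containerd and coredns sections in this dump are the node diagnostics minikube collects. Assuming the addons-069800 profile is still running, a similar dump can be regenerated, or the live container state inspected on the node, with commands along these lines:

    minikube -p addons-069800 logs
    minikube -p addons-069800 ssh -- sudo crictl ps -a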
	
	
	==> containerd <==
	Aug 19 20:25:11 addons-069800 containerd[815]: time="2024-08-19T20:25:11.090593465Z" level=info msg="RemoveContainer for \"eea1de65e8d536d76c0102cfde8d2d5a8b1c25e15b9a0cc0e745f85d77d0328c\""
	Aug 19 20:25:11 addons-069800 containerd[815]: time="2024-08-19T20:25:11.098195846Z" level=info msg="RemoveContainer for \"eea1de65e8d536d76c0102cfde8d2d5a8b1c25e15b9a0cc0e745f85d77d0328c\" returns successfully"
	Aug 19 20:25:11 addons-069800 containerd[815]: time="2024-08-19T20:25:11.100596172Z" level=info msg="StopPodSandbox for \"eb8bae1b36758c5d2733d91f5d346bbb5852a223bedbc88956ff4b7b4714bd8e\""
	Aug 19 20:25:11 addons-069800 containerd[815]: time="2024-08-19T20:25:11.109623674Z" level=info msg="TearDown network for sandbox \"eb8bae1b36758c5d2733d91f5d346bbb5852a223bedbc88956ff4b7b4714bd8e\" successfully"
	Aug 19 20:25:11 addons-069800 containerd[815]: time="2024-08-19T20:25:11.109674872Z" level=info msg="StopPodSandbox for \"eb8bae1b36758c5d2733d91f5d346bbb5852a223bedbc88956ff4b7b4714bd8e\" returns successfully"
	Aug 19 20:25:11 addons-069800 containerd[815]: time="2024-08-19T20:25:11.110455366Z" level=info msg="RemovePodSandbox for \"eb8bae1b36758c5d2733d91f5d346bbb5852a223bedbc88956ff4b7b4714bd8e\""
	Aug 19 20:25:11 addons-069800 containerd[815]: time="2024-08-19T20:25:11.110503980Z" level=info msg="Forcibly stopping sandbox \"eb8bae1b36758c5d2733d91f5d346bbb5852a223bedbc88956ff4b7b4714bd8e\""
	Aug 19 20:25:11 addons-069800 containerd[815]: time="2024-08-19T20:25:11.127497652Z" level=info msg="TearDown network for sandbox \"eb8bae1b36758c5d2733d91f5d346bbb5852a223bedbc88956ff4b7b4714bd8e\" successfully"
	Aug 19 20:25:11 addons-069800 containerd[815]: time="2024-08-19T20:25:11.134879081Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"eb8bae1b36758c5d2733d91f5d346bbb5852a223bedbc88956ff4b7b4714bd8e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Aug 19 20:25:11 addons-069800 containerd[815]: time="2024-08-19T20:25:11.135313270Z" level=info msg="RemovePodSandbox \"eb8bae1b36758c5d2733d91f5d346bbb5852a223bedbc88956ff4b7b4714bd8e\" returns successfully"
	Aug 19 20:25:45 addons-069800 containerd[815]: time="2024-08-19T20:25:45.987113654Z" level=info msg="PullImage \"ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0@sha256:6b2f7ac9fe6f547cfa541d9217f03da0d0c4615b561d5455a23d0edbbd607ecc\""
	Aug 19 20:25:46 addons-069800 containerd[815]: time="2024-08-19T20:25:46.135663354Z" level=info msg="ImageUpdate event name:\"ghcr.io/inspektor-gadget/inspektor-gadget@sha256:6b2f7ac9fe6f547cfa541d9217f03da0d0c4615b561d5455a23d0edbbd607ecc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Aug 19 20:25:46 addons-069800 containerd[815]: time="2024-08-19T20:25:46.137464278Z" level=info msg="stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget@sha256:6b2f7ac9fe6f547cfa541d9217f03da0d0c4615b561d5455a23d0edbbd607ecc: active requests=0, bytes read=89"
	Aug 19 20:25:46 addons-069800 containerd[815]: time="2024-08-19T20:25:46.141379028Z" level=info msg="Pulled image \"ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0@sha256:6b2f7ac9fe6f547cfa541d9217f03da0d0c4615b561d5455a23d0edbbd607ecc\" with image id \"sha256:e2d3313f65753f82428cf312f6e4b9237983de19680bde57ca1c0935cadbe630\", repo tag \"\", repo digest \"ghcr.io/inspektor-gadget/inspektor-gadget@sha256:6b2f7ac9fe6f547cfa541d9217f03da0d0c4615b561d5455a23d0edbbd607ecc\", size \"69907666\" in 154.215808ms"
	Aug 19 20:25:46 addons-069800 containerd[815]: time="2024-08-19T20:25:46.141429447Z" level=info msg="PullImage \"ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0@sha256:6b2f7ac9fe6f547cfa541d9217f03da0d0c4615b561d5455a23d0edbbd607ecc\" returns image reference \"sha256:e2d3313f65753f82428cf312f6e4b9237983de19680bde57ca1c0935cadbe630\""
	Aug 19 20:25:46 addons-069800 containerd[815]: time="2024-08-19T20:25:46.143934181Z" level=info msg="CreateContainer within sandbox \"b51de748c8a15b4de884edafff41d783cf535be712fa457108d5f4325cc7f6d4\" for container &ContainerMetadata{Name:gadget,Attempt:5,}"
	Aug 19 20:25:46 addons-069800 containerd[815]: time="2024-08-19T20:25:46.166945428Z" level=info msg="CreateContainer within sandbox \"b51de748c8a15b4de884edafff41d783cf535be712fa457108d5f4325cc7f6d4\" for &ContainerMetadata{Name:gadget,Attempt:5,} returns container id \"7d9ec3321a78c6fce0f8bf92e704f032683d6ec8dcf9e221dc6c672ba1b8e3f1\""
	Aug 19 20:25:46 addons-069800 containerd[815]: time="2024-08-19T20:25:46.167593904Z" level=info msg="StartContainer for \"7d9ec3321a78c6fce0f8bf92e704f032683d6ec8dcf9e221dc6c672ba1b8e3f1\""
	Aug 19 20:25:46 addons-069800 containerd[815]: time="2024-08-19T20:25:46.237480769Z" level=info msg="StartContainer for \"7d9ec3321a78c6fce0f8bf92e704f032683d6ec8dcf9e221dc6c672ba1b8e3f1\" returns successfully"
	Aug 19 20:25:47 addons-069800 containerd[815]: time="2024-08-19T20:25:47.549865355Z" level=info msg="shim disconnected" id=7d9ec3321a78c6fce0f8bf92e704f032683d6ec8dcf9e221dc6c672ba1b8e3f1 namespace=k8s.io
	Aug 19 20:25:47 addons-069800 containerd[815]: time="2024-08-19T20:25:47.549924348Z" level=warning msg="cleaning up after shim disconnected" id=7d9ec3321a78c6fce0f8bf92e704f032683d6ec8dcf9e221dc6c672ba1b8e3f1 namespace=k8s.io
	Aug 19 20:25:47 addons-069800 containerd[815]: time="2024-08-19T20:25:47.549934260Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Aug 19 20:25:47 addons-069800 containerd[815]: time="2024-08-19T20:25:47.563434464Z" level=warning msg="cleanup warnings time=\"2024-08-19T20:25:47Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
	Aug 19 20:25:48 addons-069800 containerd[815]: time="2024-08-19T20:25:48.081369829Z" level=info msg="RemoveContainer for \"27cdb1e0fd143236a6c37651bcf493e8c15e7bd05e791bfba893148554f759e1\""
	Aug 19 20:25:48 addons-069800 containerd[815]: time="2024-08-19T20:25:48.089922077Z" level=info msg="RemoveContainer for \"27cdb1e0fd143236a6c37651bcf493e8c15e7bd05e791bfba893148554f759e1\" returns successfully"
	
	
	==> coredns [5c926fc1840f998fd79321c007891e7b05ca7a88ff0c0dd1f827c2294f68a319] <==
	[INFO] 10.244.0.11:57311 - 62333 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000073393s
	[INFO] 10.244.0.11:38555 - 41024 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002809636s
	[INFO] 10.244.0.11:38555 - 12867 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002560962s
	[INFO] 10.244.0.11:49254 - 1832 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000132641s
	[INFO] 10.244.0.11:49254 - 811 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000315315s
	[INFO] 10.244.0.11:40039 - 59152 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000113351s
	[INFO] 10.244.0.11:40039 - 59165 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000042994s
	[INFO] 10.244.0.11:48168 - 53220 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000064745s
	[INFO] 10.244.0.11:48168 - 4698 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000050994s
	[INFO] 10.244.0.11:60771 - 3835 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000062218s
	[INFO] 10.244.0.11:60771 - 16121 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000041394s
	[INFO] 10.244.0.11:53364 - 15387 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001560945s
	[INFO] 10.244.0.11:53364 - 37397 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002677537s
	[INFO] 10.244.0.11:39775 - 10874 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000092978s
	[INFO] 10.244.0.11:39775 - 22535 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000063046s
	[INFO] 10.244.0.24:45329 - 9940 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.002248576s
	[INFO] 10.244.0.24:33073 - 27343 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.002037192s
	[INFO] 10.244.0.24:58444 - 60672 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000173288s
	[INFO] 10.244.0.24:45021 - 41316 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000087653s
	[INFO] 10.244.0.24:39313 - 3243 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000090689s
	[INFO] 10.244.0.24:47206 - 22977 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000090295s
	[INFO] 10.244.0.24:55439 - 44917 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002041862s
	[INFO] 10.244.0.24:48617 - 53635 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.001745138s
	[INFO] 10.244.0.24:57214 - 6870 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.000736253s
	[INFO] 10.244.0.24:49056 - 40112 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001173444s
	
	
	==> describe nodes <==
	Name:               addons-069800
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-069800
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7253360125032c7e2214e25ff4b5c894ae5844e8
	                    minikube.k8s.io/name=addons-069800
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_19T20_22_11_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-069800
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-069800"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 20:22:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-069800
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 20:27:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 20:24:13 +0000   Mon, 19 Aug 2024 20:22:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 20:24:13 +0000   Mon, 19 Aug 2024 20:22:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 20:24:13 +0000   Mon, 19 Aug 2024 20:22:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 20:24:13 +0000   Mon, 19 Aug 2024 20:22:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-069800
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022360Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022360Ki
	  pods:               110
	System Info:
	  Machine ID:                 2ed74c93fc3446f59b8311cdb57008b0
	  System UUID:                4a9c6137-c197-44e5-8215-65b604e33fd0
	  Boot ID:                    b7846bbc-2ca5-4e44-8ea6-94e5c03d25fd
	  Kernel Version:             5.15.0-1067-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.20
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (27 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-c4bc9b5f8-wv6n4      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m56s
	  gadget                      gadget-42v65                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m52s
	  gcp-auth                    gcp-auth-89d5ffd79-p2b9n                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m23s
	  ingress-nginx               ingress-nginx-controller-bc57996ff-968ff    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         4m50s
	  kube-system                 coredns-6f6b679f8f-dqfzk                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     4m59s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m47s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m47s
	  kube-system                 csi-hostpathplugin-wk2vn                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m47s
	  kube-system                 etcd-addons-069800                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         5m4s
	  kube-system                 kindnet-777c5                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      4m59s
	  kube-system                 kube-apiserver-addons-069800                250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m5s
	  kube-system                 kube-controller-manager-addons-069800       200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m4s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m56s
	  kube-system                 kube-proxy-8tlfz                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	  kube-system                 kube-scheduler-addons-069800                100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m5s
	  kube-system                 metrics-server-8988944d9-ldtcv              100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         4m52s
	  kube-system                 nvidia-device-plugin-daemonset-qrxrs        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m55s
	  kube-system                 registry-6fb4cdfc84-gr6wk                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m54s
	  kube-system                 registry-proxy-hq5bw                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m54s
	  kube-system                 snapshot-controller-56fcc65765-5czwh        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m51s
	  kube-system                 snapshot-controller-56fcc65765-q4x4d        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m51s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m54s
	  local-path-storage          local-path-provisioner-86d989889c-45q8m     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m52s
	  volcano-system              volcano-admission-77d7d48b68-ffrm8          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m49s
	  volcano-system              volcano-controllers-56675bb4d5-kb97v        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m49s
	  volcano-system              volcano-scheduler-576bc46687-f6rfz          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m48s
	  yakd-dashboard              yakd-dashboard-67d98fc6b-5lkpw              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     4m53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 4m57s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  5m11s (x8 over 5m11s)  kubelet          Node addons-069800 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m11s (x7 over 5m11s)  kubelet          Node addons-069800 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m11s (x7 over 5m11s)  kubelet          Node addons-069800 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  5m11s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 5m5s                   kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m5s                   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  5m4s                   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  5m4s                   kubelet          Node addons-069800 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m4s                   kubelet          Node addons-069800 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m4s                   kubelet          Node addons-069800 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5m                     node-controller  Node addons-069800 event: Registered Node addons-069800 in Controller
	
	
	==> dmesg <==
	[Aug19 19:46] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000009] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	
	
	==> etcd [69292e83ebf4d192a8beea0596e74861db33a106022ac8959cf7bdb136539b03] <==
	{"level":"info","ts":"2024-08-19T20:22:05.104618Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-19T20:22:05.104699Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-08-19T20:22:05.104766Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-08-19T20:22:05.106292Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-19T20:22:05.106354Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-19T20:22:05.872273Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-08-19T20:22:05.872506Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-08-19T20:22:05.872624Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-08-19T20:22:05.872746Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-08-19T20:22:05.872892Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-08-19T20:22:05.872980Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-08-19T20:22:05.873062Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-08-19T20:22:05.876392Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-069800 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-19T20:22:05.876674Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-19T20:22:05.877086Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-19T20:22:05.880237Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-19T20:22:05.881015Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-19T20:22:05.882024Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-19T20:22:05.884299Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-19T20:22:05.884555Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-19T20:22:05.884662Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-19T20:22:05.910274Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-19T20:22:05.910486Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-19T20:22:05.911207Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-19T20:22:05.912327Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	
	
	==> gcp-auth [d9d04575afd3c74ca5663b570a8b419d4bb4a6813c42c213c4cacaf9fd248438] <==
	2024/08/19 20:23:54 GCP Auth Webhook started!
	2024/08/19 20:24:12 Ready to marshal response ...
	2024/08/19 20:24:12 Ready to write response ...
	2024/08/19 20:24:13 Ready to marshal response ...
	2024/08/19 20:24:13 Ready to write response ...
	
	
	==> kernel <==
	 20:27:15 up  4:09,  0 users,  load average: 0.19, 1.33, 2.51
	Linux addons-069800 5.15.0-1067-aws #73~20.04.1-Ubuntu SMP Wed Jul 24 17:31:05 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [449c906227d59601b882fdd7497e59129868da1bd1df285adf5125d25d32d305] <==
	E0819 20:26:05.370468       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	I0819 20:26:09.497501       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0819 20:26:09.497539       1 main.go:299] handling current node
	W0819 20:26:13.985049       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0819 20:26:13.985086       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	I0819 20:26:19.497355       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0819 20:26:19.497389       1 main.go:299] handling current node
	I0819 20:26:29.497308       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0819 20:26:29.497346       1 main.go:299] handling current node
	I0819 20:26:39.497148       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0819 20:26:39.497180       1 main.go:299] handling current node
	W0819 20:26:42.358295       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0819 20:26:42.358332       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	I0819 20:26:49.497625       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0819 20:26:49.497678       1 main.go:299] handling current node
	W0819 20:26:51.009329       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0819 20:26:51.009546       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	W0819 20:26:55.794340       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0819 20:26:55.794375       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	I0819 20:26:59.497080       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0819 20:26:59.497116       1 main.go:299] handling current node
	I0819 20:27:09.497506       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0819 20:27:09.497546       1 main.go:299] handling current node
	W0819 20:27:15.538796       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0819 20:27:15.538836       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	
	
	==> kube-apiserver [433a2ab189de0f8bb54bf1be2433ec7f75649e9a7168ae67add1bb3f75267afb] <==
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0819 20:23:03.542007       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.103.28.152:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.103.28.152:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.103.28.152:443: connect: connection refused" logger="UnhandledError"
	E0819 20:23:03.547990       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.103.28.152:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.103.28.152:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.103.28.152:443: connect: connection refused" logger="UnhandledError"
	I0819 20:23:03.676436       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0819 20:23:06.758210       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.103.108.20:443: connect: connection refused
	W0819 20:23:07.779656       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.103.108.20:443: connect: connection refused
	W0819 20:23:08.830492       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.103.108.20:443: connect: connection refused
	W0819 20:23:09.859810       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.103.108.20:443: connect: connection refused
	W0819 20:23:10.871677       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.103.108.20:443: connect: connection refused
	W0819 20:23:11.639707       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.109.150.133:443: connect: connection refused
	E0819 20:23:11.639750       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.109.150.133:443: connect: connection refused" logger="UnhandledError"
	W0819 20:23:11.641468       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.103.108.20:443: connect: connection refused
	W0819 20:23:11.893788       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.103.108.20:443: connect: connection refused
	W0819 20:23:12.908383       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.103.108.20:443: connect: connection refused
	W0819 20:23:13.973819       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.103.108.20:443: connect: connection refused
	W0819 20:23:14.983325       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.103.108.20:443: connect: connection refused
	W0819 20:23:32.611893       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.109.150.133:443: connect: connection refused
	E0819 20:23:32.611952       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.109.150.133:443: connect: connection refused" logger="UnhandledError"
	W0819 20:23:32.681459       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.109.150.133:443: connect: connection refused
	E0819 20:23:32.681495       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.109.150.133:443: connect: connection refused" logger="UnhandledError"
	W0819 20:23:52.607013       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.109.150.133:443: connect: connection refused
	E0819 20:23:52.607056       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.109.150.133:443: connect: connection refused" logger="UnhandledError"
	I0819 20:24:13.064527       1 controller.go:615] quota admission added evaluator for: jobs.batch.volcano.sh
	I0819 20:24:13.106444       1 controller.go:615] quota admission added evaluator for: podgroups.scheduling.volcano.sh
	
	
	==> kube-controller-manager [bc62adb61e2f9b6c7da9856b4991d5a5d002827f1c0dc6c29f688d4fbcd4b3ba] <==
	I0819 20:23:34.664460       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0819 20:23:34.682716       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0819 20:23:36.351576       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0819 20:23:36.377624       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0819 20:23:36.712516       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-bc57996ff" duration="113.351µs"
	I0819 20:23:37.358839       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0819 20:23:37.368533       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0819 20:23:37.377590       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0819 20:23:37.387246       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0819 20:23:37.397454       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0819 20:23:37.404385       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0819 20:23:42.422403       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-069800"
	I0819 20:23:49.764406       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-bc57996ff" duration="10.230351ms"
	I0819 20:23:49.764890       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-bc57996ff" duration="82.55µs"
	I0819 20:23:52.632282       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="27.816032ms"
	I0819 20:23:52.680512       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="48.180826ms"
	I0819 20:23:52.682232       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="67.206µs"
	I0819 20:23:54.794003       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="11.215024ms"
	I0819 20:23:54.794464       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="59.887µs"
	I0819 20:24:07.022071       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="0s"
	I0819 20:24:07.023385       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="0s"
	I0819 20:24:07.073543       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="0s"
	I0819 20:24:07.073637       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="0s"
	I0819 20:24:12.771676       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="volcano-system/volcano-admission-init" delay="0s"
	I0819 20:24:13.285215       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-069800"
	
	
	==> kube-proxy [00b8deab39e72919cc2f5fe94bf04fd8e6e9ace3854fdf03b44dac52e3146a73] <==
	I0819 20:22:17.367451       1 server_linux.go:66] "Using iptables proxy"
	I0819 20:22:17.477681       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0819 20:22:17.477766       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0819 20:22:17.546556       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0819 20:22:17.546615       1 server_linux.go:169] "Using iptables Proxier"
	I0819 20:22:17.553181       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0819 20:22:17.553692       1 server.go:483] "Version info" version="v1.31.0"
	I0819 20:22:17.553709       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 20:22:17.555696       1 config.go:326] "Starting node config controller"
	I0819 20:22:17.555713       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0819 20:22:17.559352       1 config.go:197] "Starting service config controller"
	I0819 20:22:17.559373       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0819 20:22:17.559407       1 config.go:104] "Starting endpoint slice config controller"
	I0819 20:22:17.559412       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0819 20:22:17.656668       1 shared_informer.go:320] Caches are synced for node config
	I0819 20:22:17.662192       1 shared_informer.go:320] Caches are synced for service config
	I0819 20:22:17.662275       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [97172d95cae129079adee77845a861002d8bfeddac8d884976221a0016077153] <==
	W0819 20:22:08.576616       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0819 20:22:08.576756       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 20:22:09.466952       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0819 20:22:09.467214       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 20:22:09.469990       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0819 20:22:09.471156       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 20:22:09.472490       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0819 20:22:09.472957       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 20:22:09.477110       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0819 20:22:09.477631       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 20:22:09.480086       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0819 20:22:09.480300       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 20:22:09.484416       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0819 20:22:09.484612       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0819 20:22:09.497673       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0819 20:22:09.497719       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 20:22:09.581490       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0819 20:22:09.581538       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 20:22:09.612581       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0819 20:22:09.612631       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 20:22:09.617183       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0819 20:22:09.617418       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 20:22:09.760700       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0819 20:22:09.760911       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0819 20:22:11.554907       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 19 20:25:45 addons-069800 kubelet[1477]: I0819 20:25:45.985851    1477 scope.go:117] "RemoveContainer" containerID="27cdb1e0fd143236a6c37651bcf493e8c15e7bd05e791bfba893148554f759e1"
	Aug 19 20:25:48 addons-069800 kubelet[1477]: I0819 20:25:48.079573    1477 scope.go:117] "RemoveContainer" containerID="27cdb1e0fd143236a6c37651bcf493e8c15e7bd05e791bfba893148554f759e1"
	Aug 19 20:25:48 addons-069800 kubelet[1477]: I0819 20:25:48.080760    1477 scope.go:117] "RemoveContainer" containerID="7d9ec3321a78c6fce0f8bf92e704f032683d6ec8dcf9e221dc6c672ba1b8e3f1"
	Aug 19 20:25:48 addons-069800 kubelet[1477]: E0819 20:25:48.081085    1477 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-42v65_gadget(3ead55b4-b3a1-4534-a5c8-5f8979f60bb4)\"" pod="gadget/gadget-42v65" podUID="3ead55b4-b3a1-4534-a5c8-5f8979f60bb4"
	Aug 19 20:25:49 addons-069800 kubelet[1477]: I0819 20:25:49.084183    1477 scope.go:117] "RemoveContainer" containerID="7d9ec3321a78c6fce0f8bf92e704f032683d6ec8dcf9e221dc6c672ba1b8e3f1"
	Aug 19 20:25:49 addons-069800 kubelet[1477]: E0819 20:25:49.084527    1477 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-42v65_gadget(3ead55b4-b3a1-4534-a5c8-5f8979f60bb4)\"" pod="gadget/gadget-42v65" podUID="3ead55b4-b3a1-4534-a5c8-5f8979f60bb4"
	Aug 19 20:25:50 addons-069800 kubelet[1477]: I0819 20:25:50.086795    1477 scope.go:117] "RemoveContainer" containerID="7d9ec3321a78c6fce0f8bf92e704f032683d6ec8dcf9e221dc6c672ba1b8e3f1"
	Aug 19 20:25:50 addons-069800 kubelet[1477]: E0819 20:25:50.087036    1477 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-42v65_gadget(3ead55b4-b3a1-4534-a5c8-5f8979f60bb4)\"" pod="gadget/gadget-42v65" podUID="3ead55b4-b3a1-4534-a5c8-5f8979f60bb4"
	Aug 19 20:26:00 addons-069800 kubelet[1477]: I0819 20:26:00.986773    1477 scope.go:117] "RemoveContainer" containerID="7d9ec3321a78c6fce0f8bf92e704f032683d6ec8dcf9e221dc6c672ba1b8e3f1"
	Aug 19 20:26:00 addons-069800 kubelet[1477]: E0819 20:26:00.987647    1477 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-42v65_gadget(3ead55b4-b3a1-4534-a5c8-5f8979f60bb4)\"" pod="gadget/gadget-42v65" podUID="3ead55b4-b3a1-4534-a5c8-5f8979f60bb4"
	Aug 19 20:26:15 addons-069800 kubelet[1477]: I0819 20:26:15.985790    1477 scope.go:117] "RemoveContainer" containerID="7d9ec3321a78c6fce0f8bf92e704f032683d6ec8dcf9e221dc6c672ba1b8e3f1"
	Aug 19 20:26:15 addons-069800 kubelet[1477]: E0819 20:26:15.986019    1477 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-42v65_gadget(3ead55b4-b3a1-4534-a5c8-5f8979f60bb4)\"" pod="gadget/gadget-42v65" podUID="3ead55b4-b3a1-4534-a5c8-5f8979f60bb4"
	Aug 19 20:26:30 addons-069800 kubelet[1477]: I0819 20:26:30.987311    1477 scope.go:117] "RemoveContainer" containerID="7d9ec3321a78c6fce0f8bf92e704f032683d6ec8dcf9e221dc6c672ba1b8e3f1"
	Aug 19 20:26:30 addons-069800 kubelet[1477]: E0819 20:26:30.988151    1477 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-42v65_gadget(3ead55b4-b3a1-4534-a5c8-5f8979f60bb4)\"" pod="gadget/gadget-42v65" podUID="3ead55b4-b3a1-4534-a5c8-5f8979f60bb4"
	Aug 19 20:26:33 addons-069800 kubelet[1477]: I0819 20:26:33.984901    1477 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-hq5bw" secret="" err="secret \"gcp-auth\" not found"
	Aug 19 20:26:41 addons-069800 kubelet[1477]: I0819 20:26:41.985682    1477 scope.go:117] "RemoveContainer" containerID="7d9ec3321a78c6fce0f8bf92e704f032683d6ec8dcf9e221dc6c672ba1b8e3f1"
	Aug 19 20:26:41 addons-069800 kubelet[1477]: E0819 20:26:41.985912    1477 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-42v65_gadget(3ead55b4-b3a1-4534-a5c8-5f8979f60bb4)\"" pod="gadget/gadget-42v65" podUID="3ead55b4-b3a1-4534-a5c8-5f8979f60bb4"
	Aug 19 20:26:45 addons-069800 kubelet[1477]: I0819 20:26:45.985290    1477 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6fb4cdfc84-gr6wk" secret="" err="secret \"gcp-auth\" not found"
	Aug 19 20:26:47 addons-069800 kubelet[1477]: I0819 20:26:47.984948    1477 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-qrxrs" secret="" err="secret \"gcp-auth\" not found"
	Aug 19 20:26:52 addons-069800 kubelet[1477]: I0819 20:26:52.987307    1477 scope.go:117] "RemoveContainer" containerID="7d9ec3321a78c6fce0f8bf92e704f032683d6ec8dcf9e221dc6c672ba1b8e3f1"
	Aug 19 20:26:52 addons-069800 kubelet[1477]: E0819 20:26:52.988091    1477 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-42v65_gadget(3ead55b4-b3a1-4534-a5c8-5f8979f60bb4)\"" pod="gadget/gadget-42v65" podUID="3ead55b4-b3a1-4534-a5c8-5f8979f60bb4"
	Aug 19 20:27:03 addons-069800 kubelet[1477]: I0819 20:27:03.985761    1477 scope.go:117] "RemoveContainer" containerID="7d9ec3321a78c6fce0f8bf92e704f032683d6ec8dcf9e221dc6c672ba1b8e3f1"
	Aug 19 20:27:03 addons-069800 kubelet[1477]: E0819 20:27:03.985988    1477 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-42v65_gadget(3ead55b4-b3a1-4534-a5c8-5f8979f60bb4)\"" pod="gadget/gadget-42v65" podUID="3ead55b4-b3a1-4534-a5c8-5f8979f60bb4"
	Aug 19 20:27:14 addons-069800 kubelet[1477]: I0819 20:27:14.987368    1477 scope.go:117] "RemoveContainer" containerID="7d9ec3321a78c6fce0f8bf92e704f032683d6ec8dcf9e221dc6c672ba1b8e3f1"
	Aug 19 20:27:14 addons-069800 kubelet[1477]: E0819 20:27:14.987545    1477 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-42v65_gadget(3ead55b4-b3a1-4534-a5c8-5f8979f60bb4)\"" pod="gadget/gadget-42v65" podUID="3ead55b4-b3a1-4534-a5c8-5f8979f60bb4"
	
	
	==> storage-provisioner [dcb30357b82afed78f7862d508dd92951d27359e5701ad68dce24c9b1b53146c] <==
	I0819 20:22:22.074963       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0819 20:22:22.099565       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0819 20:22:22.099620       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0819 20:22:22.119706       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0819 20:22:22.121818       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-069800_71c43b1c-a954-46d8-b21c-d1f7d47e838e!
	I0819 20:22:22.128770       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e3c2abb4-4dca-4ebf-a15e-f4eef15a6da4", APIVersion:"v1", ResourceVersion:"532", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-069800_71c43b1c-a954-46d8-b21c-d1f7d47e838e became leader
	I0819 20:22:22.223940       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-069800_71c43b1c-a954-46d8-b21c-d1f7d47e838e!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-069800 -n addons-069800
helpers_test.go:261: (dbg) Run:  kubectl --context addons-069800 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-5qkx2 ingress-nginx-admission-patch-b6mnf test-job-nginx-0
helpers_test.go:274: ======> post-mortem[TestAddons/serial/Volcano]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-069800 describe pod ingress-nginx-admission-create-5qkx2 ingress-nginx-admission-patch-b6mnf test-job-nginx-0
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-069800 describe pod ingress-nginx-admission-create-5qkx2 ingress-nginx-admission-patch-b6mnf test-job-nginx-0: exit status 1 (93.470807ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-5qkx2" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-b6mnf" not found
	Error from server (NotFound): pods "test-job-nginx-0" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-069800 describe pod ingress-nginx-admission-create-5qkx2 ingress-nginx-admission-patch-b6mnf test-job-nginx-0: exit status 1
--- FAIL: TestAddons/serial/Volcano (201.01s)
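Follow-up sketch (not from the captured logs): the "describe nodes" output above shows a 2-CPU node with 1050m (52%) of CPU already requested, and test-job-nginx-0 appears in the non-running pod list, which suggests the Volcano job could not be placed for lack of CPU headroom. A minimal way to confirm this while the pod still exists, assuming the addons-069800 context and my-volcano namespace seen in the logs (standard kubectl only; the jsonpath expression is illustrative):

	kubectl --context addons-069800 describe node addons-069800 | grep -A 12 'Allocated resources'
	kubectl --context addons-069800 get pod test-job-nginx-0 -n my-volcano \
	  -o jsonpath='{range .spec.containers[*]}{.name}{" requests cpu="}{.resources.requests.cpu}{"\n"}{end}'
	kubectl --context addons-069800 get events -n my-volcano --field-selector involvedObject.name=test-job-nginx-0

If the container CPU requests exceed roughly 950m (2000m allocatable minus the 1050m already requested), the single-node cluster has no room to schedule the pod.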

TestStartStop/group/old-k8s-version/serial/SecondStart (384.83s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-127648 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
E0819 21:11:09.997034 1145018 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/functional-219483/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p old-k8s-version-127648 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: exit status 102 (6m20.209385911s)

-- stdout --
	* [old-k8s-version-127648] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19423-1139612/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-1139612/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	* Using the docker driver based on existing profile
	* Starting "old-k8s-version-127648" primary control-plane node in "old-k8s-version-127648" cluster
	* Pulling base image v0.0.44-1723740748-19452 ...
	* Restarting existing docker container for "old-k8s-version-127648" ...
	* Preparing Kubernetes v1.20.0 on containerd 1.7.20 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image registry.k8s.io/echoserver:1.4
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-127648 addons enable metrics-server
	
	* Enabled addons: storage-provisioner, metrics-server, dashboard, default-storageclass
	
	

-- /stdout --
** stderr ** 
	I0819 21:10:31.807433 1351451 out.go:345] Setting OutFile to fd 1 ...
	I0819 21:10:31.807677 1351451 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 21:10:31.807706 1351451 out.go:358] Setting ErrFile to fd 2...
	I0819 21:10:31.807727 1351451 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 21:10:31.808011 1351451 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-1139612/.minikube/bin
	I0819 21:10:31.808472 1351451 out.go:352] Setting JSON to false
	I0819 21:10:31.812943 1351451 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":17579,"bootTime":1724084253,"procs":232,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0819 21:10:31.813059 1351451 start.go:139] virtualization:  
	I0819 21:10:31.816415 1351451 out.go:177] * [old-k8s-version-127648] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0819 21:10:31.820324 1351451 out.go:177]   - MINIKUBE_LOCATION=19423
	I0819 21:10:31.820429 1351451 notify.go:220] Checking for updates...
	I0819 21:10:31.825901 1351451 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 21:10:31.829739 1351451 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19423-1139612/kubeconfig
	I0819 21:10:31.832641 1351451 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-1139612/.minikube
	I0819 21:10:31.835290 1351451 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0819 21:10:31.838537 1351451 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 21:10:31.842036 1351451 config.go:182] Loaded profile config "old-k8s-version-127648": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0819 21:10:31.845959 1351451 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0819 21:10:31.848844 1351451 driver.go:394] Setting default libvirt URI to qemu:///system
	I0819 21:10:31.898759 1351451 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0819 21:10:31.898951 1351451 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 21:10:32.012154 1351451 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:true NGoroutines:68 SystemTime:2024-08-19 21:10:31.994921597 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0819 21:10:32.012318 1351451 docker.go:307] overlay module found
	I0819 21:10:32.016091 1351451 out.go:177] * Using the docker driver based on existing profile
	I0819 21:10:32.019003 1351451 start.go:297] selected driver: docker
	I0819 21:10:32.019029 1351451 start.go:901] validating driver "docker" against &{Name:old-k8s-version-127648 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-127648 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 21:10:32.019138 1351451 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 21:10:32.019798 1351451 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 21:10:32.125508 1351451 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:true NGoroutines:68 SystemTime:2024-08-19 21:10:32.115993209 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0819 21:10:32.125863 1351451 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 21:10:32.125892 1351451 cni.go:84] Creating CNI manager for ""
	I0819 21:10:32.125901 1351451 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0819 21:10:32.125942 1351451 start.go:340] cluster config:
	{Name:old-k8s-version-127648 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-127648 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:do
cker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 21:10:32.128947 1351451 out.go:177] * Starting "old-k8s-version-127648" primary control-plane node in "old-k8s-version-127648" cluster
	I0819 21:10:32.131505 1351451 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0819 21:10:32.134321 1351451 out.go:177] * Pulling base image v0.0.44-1723740748-19452 ...
	I0819 21:10:32.137090 1351451 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0819 21:10:32.137163 1351451 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19423-1139612/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0819 21:10:32.137179 1351451 cache.go:56] Caching tarball of preloaded images
	I0819 21:10:32.137275 1351451 preload.go:172] Found /home/jenkins/minikube-integration/19423-1139612/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 21:10:32.137291 1351451 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on containerd
	I0819 21:10:32.137413 1351451 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/old-k8s-version-127648/config.json ...
	I0819 21:10:32.137643 1351451 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local docker daemon
	W0819 21:10:32.177066 1351451 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d is of wrong architecture
	I0819 21:10:32.177085 1351451 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d to local cache
	I0819 21:10:32.177159 1351451 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local cache directory
	I0819 21:10:32.177175 1351451 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local cache directory, skipping pull
	I0819 21:10:32.177180 1351451 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d exists in cache, skipping pull
	I0819 21:10:32.177188 1351451 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d as a tarball
	I0819 21:10:32.177194 1351451 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d from local cache
	I0819 21:10:32.321959 1351451 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d from cached tarball
	I0819 21:10:32.321991 1351451 cache.go:194] Successfully downloaded all kic artifacts
	I0819 21:10:32.322030 1351451 start.go:360] acquireMachinesLock for old-k8s-version-127648: {Name:mkb8c05c7d87494b229aca13a160b294c80d34e6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 21:10:32.322098 1351451 start.go:364] duration metric: took 41.928µs to acquireMachinesLock for "old-k8s-version-127648"
	I0819 21:10:32.322119 1351451 start.go:96] Skipping create...Using existing machine configuration
	I0819 21:10:32.322125 1351451 fix.go:54] fixHost starting: 
	I0819 21:10:32.322411 1351451 cli_runner.go:164] Run: docker container inspect old-k8s-version-127648 --format={{.State.Status}}
	I0819 21:10:32.353925 1351451 fix.go:112] recreateIfNeeded on old-k8s-version-127648: state=Stopped err=<nil>
	W0819 21:10:32.353954 1351451 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 21:10:32.357075 1351451 out.go:177] * Restarting existing docker container for "old-k8s-version-127648" ...
	I0819 21:10:32.360143 1351451 cli_runner.go:164] Run: docker start old-k8s-version-127648
	I0819 21:10:32.818714 1351451 cli_runner.go:164] Run: docker container inspect old-k8s-version-127648 --format={{.State.Status}}
	I0819 21:10:32.853351 1351451 kic.go:430] container "old-k8s-version-127648" state is running.
	I0819 21:10:32.853751 1351451 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-127648
	I0819 21:10:32.894375 1351451 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/old-k8s-version-127648/config.json ...
	I0819 21:10:32.894617 1351451 machine.go:93] provisionDockerMachine start ...
	I0819 21:10:32.894690 1351451 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-127648
	I0819 21:10:32.930659 1351451 main.go:141] libmachine: Using SSH client type: native
	I0819 21:10:32.930921 1351451 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 34223 <nil> <nil>}
	I0819 21:10:32.930951 1351451 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 21:10:32.931722 1351451 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58280->127.0.0.1:34223: read: connection reset by peer
	I0819 21:10:36.112275 1351451 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-127648
	
	I0819 21:10:36.112304 1351451 ubuntu.go:169] provisioning hostname "old-k8s-version-127648"
	I0819 21:10:36.112378 1351451 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-127648
	I0819 21:10:36.149973 1351451 main.go:141] libmachine: Using SSH client type: native
	I0819 21:10:36.150287 1351451 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 34223 <nil> <nil>}
	I0819 21:10:36.150299 1351451 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-127648 && echo "old-k8s-version-127648" | sudo tee /etc/hostname
	I0819 21:10:36.341592 1351451 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-127648
	
	I0819 21:10:36.341695 1351451 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-127648
	I0819 21:10:36.399653 1351451 main.go:141] libmachine: Using SSH client type: native
	I0819 21:10:36.399919 1351451 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 34223 <nil> <nil>}
	I0819 21:10:36.399940 1351451 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-127648' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-127648/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-127648' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 21:10:36.553669 1351451 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 21:10:36.553699 1351451 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19423-1139612/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-1139612/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-1139612/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-1139612/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-1139612/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-1139612/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-1139612/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-1139612/.minikube}
	I0819 21:10:36.553743 1351451 ubuntu.go:177] setting up certificates
	I0819 21:10:36.553753 1351451 provision.go:84] configureAuth start
	I0819 21:10:36.553825 1351451 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-127648
	I0819 21:10:36.580339 1351451 provision.go:143] copyHostCerts
	I0819 21:10:36.580419 1351451 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-1139612/.minikube/key.pem, removing ...
	I0819 21:10:36.580434 1351451 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-1139612/.minikube/key.pem
	I0819 21:10:36.580510 1351451 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-1139612/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-1139612/.minikube/key.pem (1675 bytes)
	I0819 21:10:36.580609 1351451 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-1139612/.minikube/ca.pem, removing ...
	I0819 21:10:36.580619 1351451 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-1139612/.minikube/ca.pem
	I0819 21:10:36.580651 1351451 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-1139612/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-1139612/.minikube/ca.pem (1078 bytes)
	I0819 21:10:36.580720 1351451 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-1139612/.minikube/cert.pem, removing ...
	I0819 21:10:36.580731 1351451 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-1139612/.minikube/cert.pem
	I0819 21:10:36.580756 1351451 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-1139612/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-1139612/.minikube/cert.pem (1123 bytes)
	I0819 21:10:36.580810 1351451 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-1139612/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-1139612/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-1139612/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-127648 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-127648]
	I0819 21:10:37.201475 1351451 provision.go:177] copyRemoteCerts
	I0819 21:10:37.201558 1351451 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 21:10:37.201602 1351451 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-127648
	I0819 21:10:37.239994 1351451 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34223 SSHKeyPath:/home/jenkins/minikube-integration/19423-1139612/.minikube/machines/old-k8s-version-127648/id_rsa Username:docker}
	I0819 21:10:37.345867 1351451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1139612/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0819 21:10:37.394048 1351451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1139612/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0819 21:10:37.436502 1351451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1139612/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 21:10:37.474592 1351451 provision.go:87] duration metric: took 920.814332ms to configureAuth
	I0819 21:10:37.474699 1351451 ubuntu.go:193] setting minikube options for container-runtime
	I0819 21:10:37.475040 1351451 config.go:182] Loaded profile config "old-k8s-version-127648": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0819 21:10:37.475105 1351451 machine.go:96] duration metric: took 4.580473033s to provisionDockerMachine
	I0819 21:10:37.475140 1351451 start.go:293] postStartSetup for "old-k8s-version-127648" (driver="docker")
	I0819 21:10:37.475187 1351451 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 21:10:37.475326 1351451 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 21:10:37.475424 1351451 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-127648
	I0819 21:10:37.506393 1351451 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34223 SSHKeyPath:/home/jenkins/minikube-integration/19423-1139612/.minikube/machines/old-k8s-version-127648/id_rsa Username:docker}
	I0819 21:10:37.610991 1351451 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 21:10:37.616730 1351451 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0819 21:10:37.616764 1351451 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0819 21:10:37.616774 1351451 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0819 21:10:37.616781 1351451 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0819 21:10:37.616791 1351451 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-1139612/.minikube/addons for local assets ...
	I0819 21:10:37.616866 1351451 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-1139612/.minikube/files for local assets ...
	I0819 21:10:37.616958 1351451 filesync.go:149] local asset: /home/jenkins/minikube-integration/19423-1139612/.minikube/files/etc/ssl/certs/11450182.pem -> 11450182.pem in /etc/ssl/certs
	I0819 21:10:37.617071 1351451 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 21:10:37.627267 1351451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1139612/.minikube/files/etc/ssl/certs/11450182.pem --> /etc/ssl/certs/11450182.pem (1708 bytes)
	I0819 21:10:37.659451 1351451 start.go:296] duration metric: took 184.235206ms for postStartSetup
	I0819 21:10:37.659616 1351451 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 21:10:37.659704 1351451 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-127648
	I0819 21:10:37.693216 1351451 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34223 SSHKeyPath:/home/jenkins/minikube-integration/19423-1139612/.minikube/machines/old-k8s-version-127648/id_rsa Username:docker}
	I0819 21:10:37.790740 1351451 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0819 21:10:37.799325 1351451 fix.go:56] duration metric: took 5.477191693s for fixHost
	I0819 21:10:37.799349 1351451 start.go:83] releasing machines lock for "old-k8s-version-127648", held for 5.477242326s
	I0819 21:10:37.799442 1351451 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-127648
	I0819 21:10:37.826333 1351451 ssh_runner.go:195] Run: cat /version.json
	I0819 21:10:37.826367 1351451 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 21:10:37.826407 1351451 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-127648
	I0819 21:10:37.826441 1351451 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-127648
	I0819 21:10:37.858615 1351451 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34223 SSHKeyPath:/home/jenkins/minikube-integration/19423-1139612/.minikube/machines/old-k8s-version-127648/id_rsa Username:docker}
	I0819 21:10:37.870112 1351451 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34223 SSHKeyPath:/home/jenkins/minikube-integration/19423-1139612/.minikube/machines/old-k8s-version-127648/id_rsa Username:docker}
	I0819 21:10:38.110943 1351451 ssh_runner.go:195] Run: systemctl --version
	I0819 21:10:38.116635 1351451 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0819 21:10:38.122872 1351451 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0819 21:10:38.154989 1351451 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0819 21:10:38.155141 1351451 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 21:10:38.166967 1351451 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0819 21:10:38.167039 1351451 start.go:495] detecting cgroup driver to use...
	I0819 21:10:38.167089 1351451 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0819 21:10:38.167164 1351451 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0819 21:10:38.189488 1351451 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0819 21:10:38.206623 1351451 docker.go:217] disabling cri-docker service (if available) ...
	I0819 21:10:38.206720 1351451 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 21:10:38.223154 1351451 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 21:10:38.243426 1351451 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 21:10:38.387206 1351451 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 21:10:38.494621 1351451 docker.go:233] disabling docker service ...
	I0819 21:10:38.494743 1351451 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 21:10:38.509929 1351451 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 21:10:38.522540 1351451 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 21:10:38.634381 1351451 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 21:10:38.745232 1351451 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 21:10:38.758961 1351451 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 21:10:38.793969 1351451 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I0819 21:10:38.807848 1351451 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0819 21:10:38.821714 1351451 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0819 21:10:38.821830 1351451 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0819 21:10:38.833678 1351451 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 21:10:38.852884 1351451 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0819 21:10:38.866232 1351451 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 21:10:38.879603 1351451 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 21:10:38.891592 1351451 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0819 21:10:38.903062 1351451 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 21:10:38.916968 1351451 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 21:10:38.932376 1351451 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 21:10:39.033828 1351451 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0819 21:10:39.238716 1351451 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0819 21:10:39.238819 1351451 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0819 21:10:39.246054 1351451 start.go:563] Will wait 60s for crictl version
	I0819 21:10:39.246179 1351451 ssh_runner.go:195] Run: which crictl
	I0819 21:10:39.253073 1351451 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 21:10:39.314092 1351451 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.20
	RuntimeApiVersion:  v1
	I0819 21:10:39.314219 1351451 ssh_runner.go:195] Run: containerd --version
	I0819 21:10:39.343113 1351451 ssh_runner.go:195] Run: containerd --version
	I0819 21:10:39.370994 1351451 out.go:177] * Preparing Kubernetes v1.20.0 on containerd 1.7.20 ...
	I0819 21:10:39.373700 1351451 cli_runner.go:164] Run: docker network inspect old-k8s-version-127648 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0819 21:10:39.390185 1351451 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0819 21:10:39.393898 1351451 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 21:10:39.408364 1351451 kubeadm.go:883] updating cluster {Name:old-k8s-version-127648 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-127648 Namespace:default APIServerHAVIP: APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/
home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 21:10:39.408486 1351451 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0819 21:10:39.408545 1351451 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 21:10:39.463488 1351451 containerd.go:627] all images are preloaded for containerd runtime.
	I0819 21:10:39.463513 1351451 containerd.go:534] Images already preloaded, skipping extraction
	I0819 21:10:39.463574 1351451 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 21:10:39.520443 1351451 containerd.go:627] all images are preloaded for containerd runtime.
	I0819 21:10:39.520505 1351451 cache_images.go:84] Images are preloaded, skipping loading
	I0819 21:10:39.520534 1351451 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.20.0 containerd true true} ...
	I0819 21:10:39.520695 1351451 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=old-k8s-version-127648 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-127648 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 21:10:39.520811 1351451 ssh_runner.go:195] Run: sudo crictl info
	I0819 21:10:39.574368 1351451 cni.go:84] Creating CNI manager for ""
	I0819 21:10:39.574388 1351451 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0819 21:10:39.574397 1351451 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 21:10:39.574419 1351451 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-127648 NodeName:old-k8s-version-127648 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt
StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0819 21:10:39.574551 1351451 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "old-k8s-version-127648"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 21:10:39.574614 1351451 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0819 21:10:39.584608 1351451 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 21:10:39.584726 1351451 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 21:10:39.593988 1351451 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (442 bytes)
	I0819 21:10:39.613323 1351451 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 21:10:39.632628 1351451 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2125 bytes)
	I0819 21:10:39.653356 1351451 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0819 21:10:39.657355 1351451 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 21:10:39.671698 1351451 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 21:10:39.815919 1351451 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 21:10:39.835036 1351451 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/old-k8s-version-127648 for IP: 192.168.85.2
	I0819 21:10:39.835114 1351451 certs.go:194] generating shared ca certs ...
	I0819 21:10:39.835147 1351451 certs.go:226] acquiring lock for ca certs: {Name:mk862c79d80b8fe3a5df83b1592928b3403a862f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 21:10:39.835341 1351451 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-1139612/.minikube/ca.key
	I0819 21:10:39.835426 1351451 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-1139612/.minikube/proxy-client-ca.key
	I0819 21:10:39.835454 1351451 certs.go:256] generating profile certs ...
	I0819 21:10:39.835585 1351451 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/old-k8s-version-127648/client.key
	I0819 21:10:39.835708 1351451 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/old-k8s-version-127648/apiserver.key.a0042f6e
	I0819 21:10:39.835786 1351451 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/old-k8s-version-127648/proxy-client.key
	I0819 21:10:39.835946 1351451 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-1139612/.minikube/certs/1145018.pem (1338 bytes)
	W0819 21:10:39.836018 1351451 certs.go:480] ignoring /home/jenkins/minikube-integration/19423-1139612/.minikube/certs/1145018_empty.pem, impossibly tiny 0 bytes
	I0819 21:10:39.836062 1351451 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-1139612/.minikube/certs/ca-key.pem (1679 bytes)
	I0819 21:10:39.836121 1351451 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-1139612/.minikube/certs/ca.pem (1078 bytes)
	I0819 21:10:39.836176 1351451 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-1139612/.minikube/certs/cert.pem (1123 bytes)
	I0819 21:10:39.836247 1351451 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-1139612/.minikube/certs/key.pem (1675 bytes)
	I0819 21:10:39.836345 1351451 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-1139612/.minikube/files/etc/ssl/certs/11450182.pem (1708 bytes)
	I0819 21:10:39.837405 1351451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1139612/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 21:10:39.892294 1351451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1139612/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 21:10:39.953768 1351451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1139612/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 21:10:40.040289 1351451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1139612/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0819 21:10:40.090732 1351451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/old-k8s-version-127648/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0819 21:10:40.135928 1351451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/old-k8s-version-127648/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0819 21:10:40.179957 1351451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/old-k8s-version-127648/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 21:10:40.211488 1351451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/old-k8s-version-127648/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0819 21:10:40.244062 1351451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1139612/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 21:10:40.273823 1351451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1139612/.minikube/certs/1145018.pem --> /usr/share/ca-certificates/1145018.pem (1338 bytes)
	I0819 21:10:40.315960 1351451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1139612/.minikube/files/etc/ssl/certs/11450182.pem --> /usr/share/ca-certificates/11450182.pem (1708 bytes)
	I0819 21:10:40.355511 1351451 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 21:10:40.383715 1351451 ssh_runner.go:195] Run: openssl version
	I0819 21:10:40.391434 1351451 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 21:10:40.406871 1351451 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 21:10:40.411089 1351451 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 20:21 /usr/share/ca-certificates/minikubeCA.pem
	I0819 21:10:40.411161 1351451 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 21:10:40.425164 1351451 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 21:10:40.443259 1351451 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1145018.pem && ln -fs /usr/share/ca-certificates/1145018.pem /etc/ssl/certs/1145018.pem"
	I0819 21:10:40.458205 1351451 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1145018.pem
	I0819 21:10:40.462438 1351451 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 20:31 /usr/share/ca-certificates/1145018.pem
	I0819 21:10:40.462509 1351451 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1145018.pem
	I0819 21:10:40.475897 1351451 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1145018.pem /etc/ssl/certs/51391683.0"
	I0819 21:10:40.489413 1351451 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11450182.pem && ln -fs /usr/share/ca-certificates/11450182.pem /etc/ssl/certs/11450182.pem"
	I0819 21:10:40.500287 1351451 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11450182.pem
	I0819 21:10:40.504787 1351451 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 20:31 /usr/share/ca-certificates/11450182.pem
	I0819 21:10:40.504921 1351451 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11450182.pem
	I0819 21:10:40.515288 1351451 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11450182.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 21:10:40.528049 1351451 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 21:10:40.533915 1351451 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 21:10:40.543963 1351451 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 21:10:40.555447 1351451 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 21:10:40.567696 1351451 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 21:10:40.576305 1351451 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 21:10:40.583874 1351451 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0819 21:10:40.591444 1351451 kubeadm.go:392] StartCluster: {Name:old-k8s-version-127648 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-127648 Namespace:default APIServerHAVIP: APIServerName:minikube
CA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/hom
e/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 21:10:40.591630 1351451 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0819 21:10:40.591723 1351451 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 21:10:40.651198 1351451 cri.go:89] found id: "f881846bec02adc45467aa1bcde798e06c205d2f213869c24d08f2277be24657"
	I0819 21:10:40.651278 1351451 cri.go:89] found id: "aa6e1bccf3b9e02915f7d77581c43d4973f5450dee4cbe799ab617b0d312978e"
	I0819 21:10:40.651298 1351451 cri.go:89] found id: "f2320644894e9ea7972856fdbc0a5bc4af919db56252536190f012afe85043a2"
	I0819 21:10:40.651319 1351451 cri.go:89] found id: "63fbaf1d278cf5491ca7dd5ad4fdb07c3e8a90f6f4dfbcf5e073c12bf517db0e"
	I0819 21:10:40.651368 1351451 cri.go:89] found id: "de6ad75777dfe1d5d57ea2a85334e48c13ce3eefa7d1e083b8c7d11680d27ee5"
	I0819 21:10:40.651394 1351451 cri.go:89] found id: "ddd7fbd4fc4b1fc85931d27f22ce6544aefafe553e7ad058a444cb1240d045ce"
	I0819 21:10:40.651414 1351451 cri.go:89] found id: "62242d9fb5d7108b3cbff9c68fdf6693fa30bbe1a57a81165d6adf95d2f9a15a"
	I0819 21:10:40.651448 1351451 cri.go:89] found id: "7708e0950e37b20add16fe5d8f9b34d5f3c928b13167f2ce6017087538b63525"
	I0819 21:10:40.651474 1351451 cri.go:89] found id: ""
	I0819 21:10:40.651560 1351451 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0819 21:10:40.670058 1351451 cri.go:116] JSON = null
	W0819 21:10:40.670159 1351451 kubeadm.go:399] unpause failed: list paused: list returned 0 containers, but ps returned 8
	I0819 21:10:40.670269 1351451 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 21:10:40.682793 1351451 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0819 21:10:40.682864 1351451 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0819 21:10:40.682946 1351451 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0819 21:10:40.696090 1351451 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0819 21:10:40.696661 1351451 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-127648" does not appear in /home/jenkins/minikube-integration/19423-1139612/kubeconfig
	I0819 21:10:40.696844 1351451 kubeconfig.go:62] /home/jenkins/minikube-integration/19423-1139612/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-127648" cluster setting kubeconfig missing "old-k8s-version-127648" context setting]
	I0819 21:10:40.697198 1351451 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-1139612/kubeconfig: {Name:mk04c9370af3a3baaacd607c194f214d66561798 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 21:10:40.701501 1351451 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0819 21:10:40.711804 1351451 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.85.2
	I0819 21:10:40.711887 1351451 kubeadm.go:597] duration metric: took 29.003271ms to restartPrimaryControlPlane
	I0819 21:10:40.711913 1351451 kubeadm.go:394] duration metric: took 120.477493ms to StartCluster
	I0819 21:10:40.711959 1351451 settings.go:142] acquiring lock: {Name:mk42a43a496b3883d027e9bc4cab1df0994edc4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 21:10:40.712058 1351451 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19423-1139612/kubeconfig
	I0819 21:10:40.712788 1351451 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-1139612/kubeconfig: {Name:mk04c9370af3a3baaacd607c194f214d66561798 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 21:10:40.713056 1351451 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0819 21:10:40.713472 1351451 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0819 21:10:40.713553 1351451 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-127648"
	I0819 21:10:40.713575 1351451 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-127648"
	W0819 21:10:40.713581 1351451 addons.go:243] addon storage-provisioner should already be in state true
	I0819 21:10:40.713606 1351451 host.go:66] Checking if "old-k8s-version-127648" exists ...
	I0819 21:10:40.714068 1351451 cli_runner.go:164] Run: docker container inspect old-k8s-version-127648 --format={{.State.Status}}
	I0819 21:10:40.714462 1351451 config.go:182] Loaded profile config "old-k8s-version-127648": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0819 21:10:40.714555 1351451 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-127648"
	I0819 21:10:40.714609 1351451 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-127648"
	I0819 21:10:40.714936 1351451 cli_runner.go:164] Run: docker container inspect old-k8s-version-127648 --format={{.State.Status}}
	I0819 21:10:40.717610 1351451 addons.go:69] Setting dashboard=true in profile "old-k8s-version-127648"
	I0819 21:10:40.717931 1351451 addons.go:234] Setting addon dashboard=true in "old-k8s-version-127648"
	W0819 21:10:40.717962 1351451 addons.go:243] addon dashboard should already be in state true
	I0819 21:10:40.718131 1351451 host.go:66] Checking if "old-k8s-version-127648" exists ...
	I0819 21:10:40.717812 1351451 out.go:177] * Verifying Kubernetes components...
	I0819 21:10:40.717863 1351451 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-127648"
	I0819 21:10:40.719317 1351451 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-127648"
	W0819 21:10:40.719624 1351451 addons.go:243] addon metrics-server should already be in state true
	I0819 21:10:40.721122 1351451 host.go:66] Checking if "old-k8s-version-127648" exists ...
	I0819 21:10:40.721739 1351451 cli_runner.go:164] Run: docker container inspect old-k8s-version-127648 --format={{.State.Status}}
	I0819 21:10:40.722461 1351451 cli_runner.go:164] Run: docker container inspect old-k8s-version-127648 --format={{.State.Status}}
	I0819 21:10:40.723031 1351451 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 21:10:40.785281 1351451 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-127648"
	W0819 21:10:40.785304 1351451 addons.go:243] addon default-storageclass should already be in state true
	I0819 21:10:40.785329 1351451 host.go:66] Checking if "old-k8s-version-127648" exists ...
	I0819 21:10:40.785758 1351451 cli_runner.go:164] Run: docker container inspect old-k8s-version-127648 --format={{.State.Status}}
	I0819 21:10:40.798057 1351451 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 21:10:40.800865 1351451 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 21:10:40.800889 1351451 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 21:10:40.800952 1351451 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-127648
	I0819 21:10:40.808055 1351451 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0819 21:10:40.810777 1351451 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0819 21:10:40.817960 1351451 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0819 21:10:40.817987 1351451 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0819 21:10:40.818055 1351451 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-127648
	I0819 21:10:40.837240 1351451 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0819 21:10:40.840151 1351451 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0819 21:10:40.840177 1351451 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0819 21:10:40.840359 1351451 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-127648
	I0819 21:10:40.846849 1351451 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 21:10:40.846880 1351451 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 21:10:40.846940 1351451 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-127648
	I0819 21:10:40.883627 1351451 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34223 SSHKeyPath:/home/jenkins/minikube-integration/19423-1139612/.minikube/machines/old-k8s-version-127648/id_rsa Username:docker}
	I0819 21:10:40.885214 1351451 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34223 SSHKeyPath:/home/jenkins/minikube-integration/19423-1139612/.minikube/machines/old-k8s-version-127648/id_rsa Username:docker}
	I0819 21:10:40.909690 1351451 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34223 SSHKeyPath:/home/jenkins/minikube-integration/19423-1139612/.minikube/machines/old-k8s-version-127648/id_rsa Username:docker}
	I0819 21:10:40.917931 1351451 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34223 SSHKeyPath:/home/jenkins/minikube-integration/19423-1139612/.minikube/machines/old-k8s-version-127648/id_rsa Username:docker}
	I0819 21:10:40.970596 1351451 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 21:10:41.006366 1351451 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-127648" to be "Ready" ...
	I0819 21:10:41.084525 1351451 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 21:10:41.141217 1351451 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0819 21:10:41.141291 1351451 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0819 21:10:41.147672 1351451 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 21:10:41.166212 1351451 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0819 21:10:41.166237 1351451 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0819 21:10:41.206731 1351451 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0819 21:10:41.206799 1351451 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0819 21:10:41.249074 1351451 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0819 21:10:41.249144 1351451 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0819 21:10:41.324056 1351451 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0819 21:10:41.324131 1351451 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0819 21:10:41.360967 1351451 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0819 21:10:41.361039 1351451 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0819 21:10:41.436084 1351451 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 21:10:41.436163 1351451 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0819 21:10:41.442248 1351451 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0819 21:10:41.442325 1351451 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	W0819 21:10:41.485555 1351451 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 21:10:41.485654 1351451 retry.go:31] will retry after 196.377202ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0819 21:10:41.492338 1351451 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 21:10:41.492423 1351451 retry.go:31] will retry after 140.06513ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 21:10:41.510747 1351451 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 21:10:41.512080 1351451 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0819 21:10:41.512105 1351451 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0819 21:10:41.553674 1351451 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0819 21:10:41.553708 1351451 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0819 21:10:41.577545 1351451 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0819 21:10:41.577570 1351451 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0819 21:10:41.632700 1351451 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0819 21:10:41.646113 1351451 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0819 21:10:41.646140 1351451 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0819 21:10:41.677708 1351451 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0819 21:10:41.682979 1351451 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0819 21:10:41.690846 1351451 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 21:10:41.690883 1351451 retry.go:31] will retry after 179.399959ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0819 21:10:41.807994 1351451 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 21:10:41.808061 1351451 retry.go:31] will retry after 419.862184ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0819 21:10:41.852551 1351451 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0819 21:10:41.852581 1351451 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 21:10:41.852607 1351451 retry.go:31] will retry after 540.297667ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 21:10:41.852583 1351451 retry.go:31] will retry after 236.25235ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 21:10:41.870834 1351451 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0819 21:10:41.949175 1351451 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 21:10:41.949210 1351451 retry.go:31] will retry after 330.427572ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 21:10:42.089070 1351451 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0819 21:10:42.177802 1351451 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 21:10:42.177848 1351451 retry.go:31] will retry after 349.10568ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 21:10:42.228137 1351451 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0819 21:10:42.280598 1351451 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0819 21:10:42.321170 1351451 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 21:10:42.321255 1351451 retry.go:31] will retry after 488.893693ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0819 21:10:42.363796 1351451 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 21:10:42.363828 1351451 retry.go:31] will retry after 718.005645ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 21:10:42.394019 1351451 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0819 21:10:42.469216 1351451 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 21:10:42.469250 1351451 retry.go:31] will retry after 376.503836ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 21:10:42.528105 1351451 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0819 21:10:42.636715 1351451 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 21:10:42.636745 1351451 retry.go:31] will retry after 437.536352ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 21:10:42.811118 1351451 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0819 21:10:42.846575 1351451 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0819 21:10:42.906725 1351451 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 21:10:42.906759 1351451 retry.go:31] will retry after 563.675609ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0819 21:10:42.937866 1351451 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 21:10:42.937902 1351451 retry.go:31] will retry after 998.439955ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 21:10:43.008176 1351451 node_ready.go:53] error getting node "old-k8s-version-127648": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-127648": dial tcp 192.168.85.2:8443: connect: connection refused
	I0819 21:10:43.075421 1351451 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0819 21:10:43.082971 1351451 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0819 21:10:43.197804 1351451 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 21:10:43.197851 1351451 retry.go:31] will retry after 793.067347ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0819 21:10:43.223830 1351451 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 21:10:43.223867 1351451 retry.go:31] will retry after 612.714323ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 21:10:43.471368 1351451 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0819 21:10:43.557249 1351451 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 21:10:43.557334 1351451 retry.go:31] will retry after 942.13527ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 21:10:43.837685 1351451 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0819 21:10:43.923461 1351451 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 21:10:43.923494 1351451 retry.go:31] will retry after 1.65024491s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 21:10:43.936741 1351451 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 21:10:43.992135 1351451 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0819 21:10:44.086590 1351451 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 21:10:44.086629 1351451 retry.go:31] will retry after 1.332632345s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0819 21:10:44.138164 1351451 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 21:10:44.138201 1351451 retry.go:31] will retry after 962.769833ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 21:10:44.499780 1351451 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0819 21:10:44.623057 1351451 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 21:10:44.623136 1351451 retry.go:31] will retry after 2.287201949s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 21:10:45.101319 1351451 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0819 21:10:45.284996 1351451 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 21:10:45.285036 1351451 retry.go:31] will retry after 1.084324381s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 21:10:45.420408 1351451 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 21:10:45.506901 1351451 node_ready.go:53] error getting node "old-k8s-version-127648": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-127648": dial tcp 192.168.85.2:8443: connect: connection refused
	I0819 21:10:45.574229 1351451 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0819 21:10:45.588900 1351451 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 21:10:45.588935 1351451 retry.go:31] will retry after 1.629647865s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0819 21:10:45.694482 1351451 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 21:10:45.694524 1351451 retry.go:31] will retry after 1.690652631s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 21:10:46.369521 1351451 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0819 21:10:46.442499 1351451 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 21:10:46.442542 1351451 retry.go:31] will retry after 2.244735002s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 21:10:46.910581 1351451 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0819 21:10:46.991848 1351451 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 21:10:46.991885 1351451 retry.go:31] will retry after 2.069015847s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 21:10:47.218839 1351451 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0819 21:10:47.291199 1351451 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 21:10:47.291242 1351451 retry.go:31] will retry after 3.172213501s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 21:10:47.385363 1351451 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0819 21:10:47.497528 1351451 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 21:10:47.497561 1351451 retry.go:31] will retry after 3.672476856s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 21:10:47.507017 1351451 node_ready.go:53] error getting node "old-k8s-version-127648": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-127648": dial tcp 192.168.85.2:8443: connect: connection refused
	I0819 21:10:48.688197 1351451 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0819 21:10:48.807898 1351451 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 21:10:48.807938 1351451 retry.go:31] will retry after 3.643169621s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 21:10:49.062114 1351451 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0819 21:10:49.260888 1351451 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 21:10:49.260917 1351451 retry.go:31] will retry after 5.236532051s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
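Every "apply failed, will retry after ..." cycle above comes from a retry helper (retry.go:31) that re-runs the kubectl apply with a growing, jittered delay until the API server on localhost:8443 starts answering. A minimal sketch of that pattern, assuming a simple doubling backoff; this is an illustration only, not minikube's actual retry implementation:

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff re-runs fn until it succeeds or attempts run out,
// sleeping for a growing, jittered delay between tries, the same shape
// as the "will retry after 196ms ... 5.2s" sequence in the log above.
func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
	var err error
	delay := base
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		jittered := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("apply failed, will retry after %v: %v\n", jittered, err)
		time.Sleep(jittered)
		delay *= 2
	}
	return err
}

func main() {
	calls := 0
	err := retryWithBackoff(6, 200*time.Millisecond, func() error {
		calls++
		if calls < 4 {
			// Stand-in for "connection to the server localhost:8443 was refused".
			return errors.New("connection refused")
		}
		return nil
	})
	fmt.Println("final result:", err)
}
```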
	I0819 21:10:49.507425 1351451 node_ready.go:53] error getting node "old-k8s-version-127648": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-127648": dial tcp 192.168.85.2:8443: connect: connection refused
	I0819 21:10:50.463920 1351451 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 21:10:51.170741 1351451 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 21:10:52.452269 1351451 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0819 21:10:54.497786 1351451 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0819 21:10:57.747314 1351451 node_ready.go:49] node "old-k8s-version-127648" has status "Ready":"True"
	I0819 21:10:57.747340 1351451 node_ready.go:38] duration metric: took 16.740876149s for node "old-k8s-version-127648" to be "Ready" ...
	I0819 21:10:57.747350 1351451 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 21:10:58.022764 1351451 pod_ready.go:79] waiting up to 6m0s for pod "coredns-74ff55c5b-fj4wf" in "kube-system" namespace to be "Ready" ...
	I0819 21:10:58.083204 1351451 pod_ready.go:93] pod "coredns-74ff55c5b-fj4wf" in "kube-system" namespace has status "Ready":"True"
	I0819 21:10:58.083228 1351451 pod_ready.go:82] duration metric: took 60.342814ms for pod "coredns-74ff55c5b-fj4wf" in "kube-system" namespace to be "Ready" ...
	I0819 21:10:58.083241 1351451 pod_ready.go:79] waiting up to 6m0s for pod "etcd-old-k8s-version-127648" in "kube-system" namespace to be "Ready" ...
	I0819 21:10:58.139204 1351451 pod_ready.go:93] pod "etcd-old-k8s-version-127648" in "kube-system" namespace has status "Ready":"True"
	I0819 21:10:58.139284 1351451 pod_ready.go:82] duration metric: took 56.034265ms for pod "etcd-old-k8s-version-127648" in "kube-system" namespace to be "Ready" ...
	I0819 21:10:58.139315 1351451 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-127648" in "kube-system" namespace to be "Ready" ...
	I0819 21:10:59.128838 1351451 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.664870881s)
	I0819 21:10:59.128999 1351451 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.958209998s)
	I0819 21:10:59.129032 1351451 addons.go:475] Verifying addon metrics-server=true in "old-k8s-version-127648"
	I0819 21:10:59.256561 1351451 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.804241421s)
	I0819 21:10:59.256847 1351451 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (4.759030983s)
	I0819 21:10:59.260790 1351451 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-127648 addons enable metrics-server
	
	I0819 21:10:59.265635 1351451 out.go:177] * Enabled addons: storage-provisioner, metrics-server, dashboard, default-storageclass
	I0819 21:10:59.268363 1351451 addons.go:510] duration metric: took 18.554884277s for enable addons: enabled=[storage-provisioner metrics-server dashboard default-storageclass]
	I0819 21:11:00.155163 1351451 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-127648" in "kube-system" namespace has status "Ready":"False"
	I0819 21:11:02.645316 1351451 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-127648" in "kube-system" namespace has status "Ready":"False"
	I0819 21:11:04.645953 1351451 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-127648" in "kube-system" namespace has status "Ready":"False"
	I0819 21:11:05.646762 1351451 pod_ready.go:93] pod "kube-apiserver-old-k8s-version-127648" in "kube-system" namespace has status "Ready":"True"
	I0819 21:11:05.646789 1351451 pod_ready.go:82] duration metric: took 7.507452867s for pod "kube-apiserver-old-k8s-version-127648" in "kube-system" namespace to be "Ready" ...
	I0819 21:11:05.646803 1351451 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-127648" in "kube-system" namespace to be "Ready" ...
	I0819 21:11:07.653842 1351451 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-127648" in "kube-system" namespace has status "Ready":"False"
	I0819 21:11:10.154524 1351451 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-127648" in "kube-system" namespace has status "Ready":"False"
	I0819 21:11:12.653666 1351451 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-127648" in "kube-system" namespace has status "Ready":"False"
	I0819 21:11:15.171378 1351451 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-127648" in "kube-system" namespace has status "Ready":"False"
	I0819 21:11:17.654744 1351451 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-127648" in "kube-system" namespace has status "Ready":"False"
	I0819 21:11:19.674952 1351451 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-127648" in "kube-system" namespace has status "Ready":"False"
	I0819 21:11:22.154249 1351451 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-127648" in "kube-system" namespace has status "Ready":"False"
	I0819 21:11:24.172829 1351451 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-127648" in "kube-system" namespace has status "Ready":"False"
	I0819 21:11:26.653689 1351451 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-127648" in "kube-system" namespace has status "Ready":"False"
	I0819 21:11:29.153348 1351451 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-127648" in "kube-system" namespace has status "Ready":"False"
	I0819 21:11:31.154640 1351451 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-127648" in "kube-system" namespace has status "Ready":"False"
	I0819 21:11:33.154697 1351451 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-127648" in "kube-system" namespace has status "Ready":"False"
	I0819 21:11:35.161624 1351451 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-127648" in "kube-system" namespace has status "Ready":"False"
	I0819 21:11:37.652533 1351451 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-127648" in "kube-system" namespace has status "Ready":"False"
	I0819 21:11:39.653480 1351451 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-127648" in "kube-system" namespace has status "Ready":"False"
	I0819 21:11:41.653570 1351451 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-127648" in "kube-system" namespace has status "Ready":"False"
	I0819 21:11:43.655326 1351451 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-127648" in "kube-system" namespace has status "Ready":"False"
	I0819 21:11:45.655565 1351451 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-127648" in "kube-system" namespace has status "Ready":"False"
	I0819 21:11:48.155245 1351451 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-127648" in "kube-system" namespace has status "Ready":"False"
	I0819 21:11:50.654345 1351451 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-127648" in "kube-system" namespace has status "Ready":"False"
	I0819 21:11:52.655015 1351451 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-127648" in "kube-system" namespace has status "Ready":"False"
	I0819 21:11:55.153998 1351451 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-127648" in "kube-system" namespace has status "Ready":"False"
	I0819 21:11:57.183466 1351451 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-127648" in "kube-system" namespace has status "Ready":"False"
	I0819 21:11:59.653846 1351451 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-127648" in "kube-system" namespace has status "Ready":"False"
	I0819 21:12:02.156580 1351451 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-127648" in "kube-system" namespace has status "Ready":"False"
	I0819 21:12:04.653671 1351451 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-127648" in "kube-system" namespace has status "Ready":"False"
	I0819 21:12:07.160193 1351451 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-127648" in "kube-system" namespace has status "Ready":"False"
	I0819 21:12:09.653525 1351451 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-127648" in "kube-system" namespace has status "Ready":"False"
	I0819 21:12:11.655775 1351451 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-127648" in "kube-system" namespace has status "Ready":"False"
	I0819 21:12:14.153884 1351451 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-127648" in "kube-system" namespace has status "Ready":"False"
	I0819 21:12:16.653892 1351451 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-127648" in "kube-system" namespace has status "Ready":"False"
	I0819 21:12:18.657952 1351451 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-127648" in "kube-system" namespace has status "Ready":"False"
	I0819 21:12:21.154059 1351451 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-127648" in "kube-system" namespace has status "Ready":"False"
	I0819 21:12:23.653805 1351451 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-127648" in "kube-system" namespace has status "Ready":"False"
	I0819 21:12:25.654002 1351451 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-127648" in "kube-system" namespace has status "Ready":"False"
	I0819 21:12:26.653004 1351451 pod_ready.go:93] pod "kube-controller-manager-old-k8s-version-127648" in "kube-system" namespace has status "Ready":"True"
	I0819 21:12:26.653029 1351451 pod_ready.go:82] duration metric: took 1m21.006217827s for pod "kube-controller-manager-old-k8s-version-127648" in "kube-system" namespace to be "Ready" ...
	I0819 21:12:26.653042 1351451 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-l9jdt" in "kube-system" namespace to be "Ready" ...
	I0819 21:12:26.658652 1351451 pod_ready.go:93] pod "kube-proxy-l9jdt" in "kube-system" namespace has status "Ready":"True"
	I0819 21:12:26.658677 1351451 pod_ready.go:82] duration metric: took 5.603221ms for pod "kube-proxy-l9jdt" in "kube-system" namespace to be "Ready" ...
	I0819 21:12:26.658688 1351451 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-127648" in "kube-system" namespace to be "Ready" ...
	I0819 21:12:26.668147 1351451 pod_ready.go:93] pod "kube-scheduler-old-k8s-version-127648" in "kube-system" namespace has status "Ready":"True"
	I0819 21:12:26.668172 1351451 pod_ready.go:82] duration metric: took 9.475906ms for pod "kube-scheduler-old-k8s-version-127648" in "kube-system" namespace to be "Ready" ...
	I0819 21:12:26.668185 1351451 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-9975d5f86-4glzw" in "kube-system" namespace to be "Ready" ...
	I0819 21:12:28.673562 1351451 pod_ready.go:103] pod "metrics-server-9975d5f86-4glzw" in "kube-system" namespace has status "Ready":"False"
	I0819 21:12:31.174497 1351451 pod_ready.go:103] pod "metrics-server-9975d5f86-4glzw" in "kube-system" namespace has status "Ready":"False"
	I0819 21:12:33.174684 1351451 pod_ready.go:103] pod "metrics-server-9975d5f86-4glzw" in "kube-system" namespace has status "Ready":"False"
	I0819 21:12:35.675171 1351451 pod_ready.go:103] pod "metrics-server-9975d5f86-4glzw" in "kube-system" namespace has status "Ready":"False"
	I0819 21:12:38.191116 1351451 pod_ready.go:103] pod "metrics-server-9975d5f86-4glzw" in "kube-system" namespace has status "Ready":"False"
	I0819 21:12:40.674520 1351451 pod_ready.go:103] pod "metrics-server-9975d5f86-4glzw" in "kube-system" namespace has status "Ready":"False"
	I0819 21:12:42.675355 1351451 pod_ready.go:103] pod "metrics-server-9975d5f86-4glzw" in "kube-system" namespace has status "Ready":"False"
	I0819 21:12:45.175341 1351451 pod_ready.go:103] pod "metrics-server-9975d5f86-4glzw" in "kube-system" namespace has status "Ready":"False"
	I0819 21:12:47.674116 1351451 pod_ready.go:103] pod "metrics-server-9975d5f86-4glzw" in "kube-system" namespace has status "Ready":"False"
	I0819 21:12:49.675079 1351451 pod_ready.go:103] pod "metrics-server-9975d5f86-4glzw" in "kube-system" namespace has status "Ready":"False"
	I0819 21:12:52.174906 1351451 pod_ready.go:103] pod "metrics-server-9975d5f86-4glzw" in "kube-system" namespace has status "Ready":"False"
	I0819 21:12:54.177250 1351451 pod_ready.go:103] pod "metrics-server-9975d5f86-4glzw" in "kube-system" namespace has status "Ready":"False"
	I0819 21:12:56.177355 1351451 pod_ready.go:103] pod "metrics-server-9975d5f86-4glzw" in "kube-system" namespace has status "Ready":"False"
	I0819 21:12:58.676661 1351451 pod_ready.go:103] pod "metrics-server-9975d5f86-4glzw" in "kube-system" namespace has status "Ready":"False"
	I0819 21:13:01.175391 1351451 pod_ready.go:103] pod "metrics-server-9975d5f86-4glzw" in "kube-system" namespace has status "Ready":"False"
	I0819 21:13:03.674960 1351451 pod_ready.go:103] pod "metrics-server-9975d5f86-4glzw" in "kube-system" namespace has status "Ready":"False"
	I0819 21:13:06.175294 1351451 pod_ready.go:103] pod "metrics-server-9975d5f86-4glzw" in "kube-system" namespace has status "Ready":"False"
	I0819 21:13:08.675339 1351451 pod_ready.go:103] pod "metrics-server-9975d5f86-4glzw" in "kube-system" namespace has status "Ready":"False"
	I0819 21:13:11.174392 1351451 pod_ready.go:103] pod "metrics-server-9975d5f86-4glzw" in "kube-system" namespace has status "Ready":"False"
	I0819 21:13:13.174622 1351451 pod_ready.go:103] pod "metrics-server-9975d5f86-4glzw" in "kube-system" namespace has status "Ready":"False"
	I0819 21:13:15.675666 1351451 pod_ready.go:103] pod "metrics-server-9975d5f86-4glzw" in "kube-system" namespace has status "Ready":"False"
	I0819 21:13:18.174919 1351451 pod_ready.go:103] pod "metrics-server-9975d5f86-4glzw" in "kube-system" namespace has status "Ready":"False"
	I0819 21:13:20.175271 1351451 pod_ready.go:103] pod "metrics-server-9975d5f86-4glzw" in "kube-system" namespace has status "Ready":"False"
	I0819 21:13:22.674917 1351451 pod_ready.go:103] pod "metrics-server-9975d5f86-4glzw" in "kube-system" namespace has status "Ready":"False"
	I0819 21:13:25.173768 1351451 pod_ready.go:103] pod "metrics-server-9975d5f86-4glzw" in "kube-system" namespace has status "Ready":"False"
	I0819 21:13:27.673440 1351451 pod_ready.go:103] pod "metrics-server-9975d5f86-4glzw" in "kube-system" namespace has status "Ready":"False"
	I0819 21:13:29.674244 1351451 pod_ready.go:103] pod "metrics-server-9975d5f86-4glzw" in "kube-system" namespace has status "Ready":"False"
	I0819 21:13:31.674376 1351451 pod_ready.go:103] pod "metrics-server-9975d5f86-4glzw" in "kube-system" namespace has status "Ready":"False"
	I0819 21:13:33.674890 1351451 pod_ready.go:103] pod "metrics-server-9975d5f86-4glzw" in "kube-system" namespace has status "Ready":"False"
	I0819 21:13:36.174897 1351451 pod_ready.go:103] pod "metrics-server-9975d5f86-4glzw" in "kube-system" namespace has status "Ready":"False"
	I0819 21:13:38.175117 1351451 pod_ready.go:103] pod "metrics-server-9975d5f86-4glzw" in "kube-system" namespace has status "Ready":"False"
	I0819 21:13:40.175705 1351451 pod_ready.go:103] pod "metrics-server-9975d5f86-4glzw" in "kube-system" namespace has status "Ready":"False"
	I0819 21:13:42.181717 1351451 pod_ready.go:103] pod "metrics-server-9975d5f86-4glzw" in "kube-system" namespace has status "Ready":"False"
	I0819 21:13:44.674515 1351451 pod_ready.go:103] pod "metrics-server-9975d5f86-4glzw" in "kube-system" namespace has status "Ready":"False"
	I0819 21:13:46.675863 1351451 pod_ready.go:103] pod "metrics-server-9975d5f86-4glzw" in "kube-system" namespace has status "Ready":"False"
	I0819 21:13:49.174704 1351451 pod_ready.go:103] pod "metrics-server-9975d5f86-4glzw" in "kube-system" namespace has status "Ready":"False"
	I0819 21:13:51.175110 1351451 pod_ready.go:103] pod "metrics-server-9975d5f86-4glzw" in "kube-system" namespace has status "Ready":"False"
	I0819 21:13:53.674489 1351451 pod_ready.go:103] pod "metrics-server-9975d5f86-4glzw" in "kube-system" namespace has status "Ready":"False"
	I0819 21:13:55.674991 1351451 pod_ready.go:103] pod "metrics-server-9975d5f86-4glzw" in "kube-system" namespace has status "Ready":"False"
	I0819 21:13:57.675430 1351451 pod_ready.go:103] pod "metrics-server-9975d5f86-4glzw" in "kube-system" namespace has status "Ready":"False"
	I0819 21:14:00.231655 1351451 pod_ready.go:103] pod "metrics-server-9975d5f86-4glzw" in "kube-system" namespace has status "Ready":"False"
	I0819 21:14:02.674756 1351451 pod_ready.go:103] pod "metrics-server-9975d5f86-4glzw" in "kube-system" namespace has status "Ready":"False"
	I0819 21:14:05.175239 1351451 pod_ready.go:103] pod "metrics-server-9975d5f86-4glzw" in "kube-system" namespace has status "Ready":"False"
	I0819 21:14:07.676805 1351451 pod_ready.go:103] pod "metrics-server-9975d5f86-4glzw" in "kube-system" namespace has status "Ready":"False"
	I0819 21:14:10.174401 1351451 pod_ready.go:103] pod "metrics-server-9975d5f86-4glzw" in "kube-system" namespace has status "Ready":"False"
	I0819 21:14:12.175301 1351451 pod_ready.go:103] pod "metrics-server-9975d5f86-4glzw" in "kube-system" namespace has status "Ready":"False"
	I0819 21:14:14.674778 1351451 pod_ready.go:103] pod "metrics-server-9975d5f86-4glzw" in "kube-system" namespace has status "Ready":"False"
	I0819 21:14:17.176545 1351451 pod_ready.go:103] pod "metrics-server-9975d5f86-4glzw" in "kube-system" namespace has status "Ready":"False"
	I0819 21:14:19.674152 1351451 pod_ready.go:103] pod "metrics-server-9975d5f86-4glzw" in "kube-system" namespace has status "Ready":"False"
	I0819 21:14:22.175115 1351451 pod_ready.go:103] pod "metrics-server-9975d5f86-4glzw" in "kube-system" namespace has status "Ready":"False"
	I0819 21:14:24.176959 1351451 pod_ready.go:103] pod "metrics-server-9975d5f86-4glzw" in "kube-system" namespace has status "Ready":"False"
	I0819 21:14:26.673821 1351451 pod_ready.go:103] pod "metrics-server-9975d5f86-4glzw" in "kube-system" namespace has status "Ready":"False"
	I0819 21:14:28.678129 1351451 pod_ready.go:103] pod "metrics-server-9975d5f86-4glzw" in "kube-system" namespace has status "Ready":"False"
	I0819 21:14:31.174283 1351451 pod_ready.go:103] pod "metrics-server-9975d5f86-4glzw" in "kube-system" namespace has status "Ready":"False"
	I0819 21:14:33.174491 1351451 pod_ready.go:103] pod "metrics-server-9975d5f86-4glzw" in "kube-system" namespace has status "Ready":"False"
	I0819 21:14:35.175725 1351451 pod_ready.go:103] pod "metrics-server-9975d5f86-4glzw" in "kube-system" namespace has status "Ready":"False"
	I0819 21:14:37.674268 1351451 pod_ready.go:103] pod "metrics-server-9975d5f86-4glzw" in "kube-system" namespace has status "Ready":"False"
	I0819 21:14:40.174741 1351451 pod_ready.go:103] pod "metrics-server-9975d5f86-4glzw" in "kube-system" namespace has status "Ready":"False"
	I0819 21:14:42.175401 1351451 pod_ready.go:103] pod "metrics-server-9975d5f86-4glzw" in "kube-system" namespace has status "Ready":"False"
	I0819 21:14:44.176832 1351451 pod_ready.go:103] pod "metrics-server-9975d5f86-4glzw" in "kube-system" namespace has status "Ready":"False"
	I0819 21:14:46.177396 1351451 pod_ready.go:103] pod "metrics-server-9975d5f86-4glzw" in "kube-system" namespace has status "Ready":"False"
	I0819 21:14:48.682776 1351451 pod_ready.go:103] pod "metrics-server-9975d5f86-4glzw" in "kube-system" namespace has status "Ready":"False"
	I0819 21:14:51.175706 1351451 pod_ready.go:103] pod "metrics-server-9975d5f86-4glzw" in "kube-system" namespace has status "Ready":"False"
	I0819 21:14:53.175937 1351451 pod_ready.go:103] pod "metrics-server-9975d5f86-4glzw" in "kube-system" namespace has status "Ready":"False"
	I0819 21:14:55.673783 1351451 pod_ready.go:103] pod "metrics-server-9975d5f86-4glzw" in "kube-system" namespace has status "Ready":"False"
	I0819 21:14:57.674811 1351451 pod_ready.go:103] pod "metrics-server-9975d5f86-4glzw" in "kube-system" namespace has status "Ready":"False"
	I0819 21:15:00.196785 1351451 pod_ready.go:103] pod "metrics-server-9975d5f86-4glzw" in "kube-system" namespace has status "Ready":"False"
	I0819 21:15:02.674349 1351451 pod_ready.go:103] pod "metrics-server-9975d5f86-4glzw" in "kube-system" namespace has status "Ready":"False"
	I0819 21:15:05.173705 1351451 pod_ready.go:103] pod "metrics-server-9975d5f86-4glzw" in "kube-system" namespace has status "Ready":"False"
	I0819 21:15:07.175338 1351451 pod_ready.go:103] pod "metrics-server-9975d5f86-4glzw" in "kube-system" namespace has status "Ready":"False"
	I0819 21:15:09.176722 1351451 pod_ready.go:103] pod "metrics-server-9975d5f86-4glzw" in "kube-system" namespace has status "Ready":"False"
	I0819 21:15:11.674821 1351451 pod_ready.go:103] pod "metrics-server-9975d5f86-4glzw" in "kube-system" namespace has status "Ready":"False"
	I0819 21:15:14.175635 1351451 pod_ready.go:103] pod "metrics-server-9975d5f86-4glzw" in "kube-system" namespace has status "Ready":"False"
	I0819 21:15:16.674816 1351451 pod_ready.go:103] pod "metrics-server-9975d5f86-4glzw" in "kube-system" namespace has status "Ready":"False"
	I0819 21:15:18.675309 1351451 pod_ready.go:103] pod "metrics-server-9975d5f86-4glzw" in "kube-system" namespace has status "Ready":"False"
	I0819 21:15:21.175280 1351451 pod_ready.go:103] pod "metrics-server-9975d5f86-4glzw" in "kube-system" namespace has status "Ready":"False"
	I0819 21:15:23.676677 1351451 pod_ready.go:103] pod "metrics-server-9975d5f86-4glzw" in "kube-system" namespace has status "Ready":"False"
	I0819 21:15:26.178446 1351451 pod_ready.go:103] pod "metrics-server-9975d5f86-4glzw" in "kube-system" namespace has status "Ready":"False"
	I0819 21:15:28.675193 1351451 pod_ready.go:103] pod "metrics-server-9975d5f86-4glzw" in "kube-system" namespace has status "Ready":"False"
	I0819 21:15:31.175846 1351451 pod_ready.go:103] pod "metrics-server-9975d5f86-4glzw" in "kube-system" namespace has status "Ready":"False"
	I0819 21:15:33.678605 1351451 pod_ready.go:103] pod "metrics-server-9975d5f86-4glzw" in "kube-system" namespace has status "Ready":"False"
	I0819 21:15:36.173782 1351451 pod_ready.go:103] pod "metrics-server-9975d5f86-4glzw" in "kube-system" namespace has status "Ready":"False"
	I0819 21:15:38.174728 1351451 pod_ready.go:103] pod "metrics-server-9975d5f86-4glzw" in "kube-system" namespace has status "Ready":"False"
	I0819 21:15:40.675597 1351451 pod_ready.go:103] pod "metrics-server-9975d5f86-4glzw" in "kube-system" namespace has status "Ready":"False"
	I0819 21:15:43.175235 1351451 pod_ready.go:103] pod "metrics-server-9975d5f86-4glzw" in "kube-system" namespace has status "Ready":"False"
	I0819 21:15:45.175886 1351451 pod_ready.go:103] pod "metrics-server-9975d5f86-4glzw" in "kube-system" namespace has status "Ready":"False"
	I0819 21:15:47.674048 1351451 pod_ready.go:103] pod "metrics-server-9975d5f86-4glzw" in "kube-system" namespace has status "Ready":"False"
	I0819 21:15:49.674457 1351451 pod_ready.go:103] pod "metrics-server-9975d5f86-4glzw" in "kube-system" namespace has status "Ready":"False"
	I0819 21:15:51.675075 1351451 pod_ready.go:103] pod "metrics-server-9975d5f86-4glzw" in "kube-system" namespace has status "Ready":"False"
	I0819 21:15:53.676251 1351451 pod_ready.go:103] pod "metrics-server-9975d5f86-4glzw" in "kube-system" namespace has status "Ready":"False"
	I0819 21:15:56.175158 1351451 pod_ready.go:103] pod "metrics-server-9975d5f86-4glzw" in "kube-system" namespace has status "Ready":"False"
	I0819 21:15:58.176637 1351451 pod_ready.go:103] pod "metrics-server-9975d5f86-4glzw" in "kube-system" namespace has status "Ready":"False"
	I0819 21:16:00.199660 1351451 pod_ready.go:103] pod "metrics-server-9975d5f86-4glzw" in "kube-system" namespace has status "Ready":"False"
	I0819 21:16:02.685086 1351451 pod_ready.go:103] pod "metrics-server-9975d5f86-4glzw" in "kube-system" namespace has status "Ready":"False"
	I0819 21:16:05.175119 1351451 pod_ready.go:103] pod "metrics-server-9975d5f86-4glzw" in "kube-system" namespace has status "Ready":"False"
	I0819 21:16:07.674770 1351451 pod_ready.go:103] pod "metrics-server-9975d5f86-4glzw" in "kube-system" namespace has status "Ready":"False"
	I0819 21:16:10.175067 1351451 pod_ready.go:103] pod "metrics-server-9975d5f86-4glzw" in "kube-system" namespace has status "Ready":"False"
	I0819 21:16:12.674503 1351451 pod_ready.go:103] pod "metrics-server-9975d5f86-4glzw" in "kube-system" namespace has status "Ready":"False"
	I0819 21:16:15.176484 1351451 pod_ready.go:103] pod "metrics-server-9975d5f86-4glzw" in "kube-system" namespace has status "Ready":"False"
	I0819 21:16:17.674418 1351451 pod_ready.go:103] pod "metrics-server-9975d5f86-4glzw" in "kube-system" namespace has status "Ready":"False"
	I0819 21:16:19.683024 1351451 pod_ready.go:103] pod "metrics-server-9975d5f86-4glzw" in "kube-system" namespace has status "Ready":"False"
	I0819 21:16:22.175433 1351451 pod_ready.go:103] pod "metrics-server-9975d5f86-4glzw" in "kube-system" namespace has status "Ready":"False"
	I0819 21:16:24.181379 1351451 pod_ready.go:103] pod "metrics-server-9975d5f86-4glzw" in "kube-system" namespace has status "Ready":"False"
	I0819 21:16:26.675944 1351451 pod_ready.go:103] pod "metrics-server-9975d5f86-4glzw" in "kube-system" namespace has status "Ready":"False"
	I0819 21:16:26.675980 1351451 pod_ready.go:82] duration metric: took 4m0.007787366s for pod "metrics-server-9975d5f86-4glzw" in "kube-system" namespace to be "Ready" ...
	E0819 21:16:26.675991 1351451 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0819 21:16:26.676030 1351451 pod_ready.go:39] duration metric: took 5m28.928635211s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 21:16:26.676055 1351451 api_server.go:52] waiting for apiserver process to appear ...
	I0819 21:16:26.676101 1351451 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0819 21:16:26.676201 1351451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 21:16:26.722017 1351451 cri.go:89] found id: "863a642d161e02c4a9c3c27a6f9246c0a91afa231d64297aa4c31f2bf085af9e"
	I0819 21:16:26.722043 1351451 cri.go:89] found id: "ddd7fbd4fc4b1fc85931d27f22ce6544aefafe553e7ad058a444cb1240d045ce"
	I0819 21:16:26.722049 1351451 cri.go:89] found id: ""
	I0819 21:16:26.722056 1351451 logs.go:276] 2 containers: [863a642d161e02c4a9c3c27a6f9246c0a91afa231d64297aa4c31f2bf085af9e ddd7fbd4fc4b1fc85931d27f22ce6544aefafe553e7ad058a444cb1240d045ce]
	I0819 21:16:26.722117 1351451 ssh_runner.go:195] Run: which crictl
	I0819 21:16:26.725648 1351451 ssh_runner.go:195] Run: which crictl
	I0819 21:16:26.729023 1351451 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0819 21:16:26.729144 1351451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 21:16:26.770425 1351451 cri.go:89] found id: "fe816949c9d21133dfa290d66d4a672f9934aa3758c37c6b52633143ce54f267"
	I0819 21:16:26.770446 1351451 cri.go:89] found id: "de6ad75777dfe1d5d57ea2a85334e48c13ce3eefa7d1e083b8c7d11680d27ee5"
	I0819 21:16:26.770451 1351451 cri.go:89] found id: ""
	I0819 21:16:26.770458 1351451 logs.go:276] 2 containers: [fe816949c9d21133dfa290d66d4a672f9934aa3758c37c6b52633143ce54f267 de6ad75777dfe1d5d57ea2a85334e48c13ce3eefa7d1e083b8c7d11680d27ee5]
	I0819 21:16:26.770516 1351451 ssh_runner.go:195] Run: which crictl
	I0819 21:16:26.774330 1351451 ssh_runner.go:195] Run: which crictl
	I0819 21:16:26.777988 1351451 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0819 21:16:26.778056 1351451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 21:16:26.831135 1351451 cri.go:89] found id: "1c4204d908305e49b9bc8e0a713f3a60c3701a9edfadbf50bdaf6f12d055498e"
	I0819 21:16:26.831205 1351451 cri.go:89] found id: "f881846bec02adc45467aa1bcde798e06c205d2f213869c24d08f2277be24657"
	I0819 21:16:26.831224 1351451 cri.go:89] found id: ""
	I0819 21:16:26.831251 1351451 logs.go:276] 2 containers: [1c4204d908305e49b9bc8e0a713f3a60c3701a9edfadbf50bdaf6f12d055498e f881846bec02adc45467aa1bcde798e06c205d2f213869c24d08f2277be24657]
	I0819 21:16:26.831331 1351451 ssh_runner.go:195] Run: which crictl
	I0819 21:16:26.835121 1351451 ssh_runner.go:195] Run: which crictl
	I0819 21:16:26.838706 1351451 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0819 21:16:26.838823 1351451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 21:16:26.885971 1351451 cri.go:89] found id: "c4fa68e3e489b8ed0f15e0a99bcf04281f96ae3328ebc705a20531880e4d77b4"
	I0819 21:16:26.886020 1351451 cri.go:89] found id: "62242d9fb5d7108b3cbff9c68fdf6693fa30bbe1a57a81165d6adf95d2f9a15a"
	I0819 21:16:26.886043 1351451 cri.go:89] found id: ""
	I0819 21:16:26.886073 1351451 logs.go:276] 2 containers: [c4fa68e3e489b8ed0f15e0a99bcf04281f96ae3328ebc705a20531880e4d77b4 62242d9fb5d7108b3cbff9c68fdf6693fa30bbe1a57a81165d6adf95d2f9a15a]
	I0819 21:16:26.886149 1351451 ssh_runner.go:195] Run: which crictl
	I0819 21:16:26.890432 1351451 ssh_runner.go:195] Run: which crictl
	I0819 21:16:26.894051 1351451 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0819 21:16:26.894167 1351451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 21:16:26.940512 1351451 cri.go:89] found id: "3eaea7a4d564682eae94a2b48007416b88761d90abe2579a6ba16eff34bdba9a"
	I0819 21:16:26.940585 1351451 cri.go:89] found id: "63fbaf1d278cf5491ca7dd5ad4fdb07c3e8a90f6f4dfbcf5e073c12bf517db0e"
	I0819 21:16:26.940605 1351451 cri.go:89] found id: ""
	I0819 21:16:26.940633 1351451 logs.go:276] 2 containers: [3eaea7a4d564682eae94a2b48007416b88761d90abe2579a6ba16eff34bdba9a 63fbaf1d278cf5491ca7dd5ad4fdb07c3e8a90f6f4dfbcf5e073c12bf517db0e]
	I0819 21:16:26.940708 1351451 ssh_runner.go:195] Run: which crictl
	I0819 21:16:26.944532 1351451 ssh_runner.go:195] Run: which crictl
	I0819 21:16:26.947982 1351451 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 21:16:26.948103 1351451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 21:16:27.064038 1351451 cri.go:89] found id: "a93b5c5a814c88adf2a58e19336d5018f346f39f3866b52c57d7df973b978363"
	I0819 21:16:27.064065 1351451 cri.go:89] found id: "7708e0950e37b20add16fe5d8f9b34d5f3c928b13167f2ce6017087538b63525"
	I0819 21:16:27.064071 1351451 cri.go:89] found id: ""
	I0819 21:16:27.064079 1351451 logs.go:276] 2 containers: [a93b5c5a814c88adf2a58e19336d5018f346f39f3866b52c57d7df973b978363 7708e0950e37b20add16fe5d8f9b34d5f3c928b13167f2ce6017087538b63525]
	I0819 21:16:27.064160 1351451 ssh_runner.go:195] Run: which crictl
	I0819 21:16:27.075192 1351451 ssh_runner.go:195] Run: which crictl
	I0819 21:16:27.084336 1351451 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0819 21:16:27.084446 1351451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 21:16:27.209506 1351451 cri.go:89] found id: "2b97b1c764dc6a9ef65fd39fe731d38828a1720efb0d703155aa021f2e1cfa20"
	I0819 21:16:27.209543 1351451 cri.go:89] found id: "aa6e1bccf3b9e02915f7d77581c43d4973f5450dee4cbe799ab617b0d312978e"
	I0819 21:16:27.209548 1351451 cri.go:89] found id: ""
	I0819 21:16:27.209556 1351451 logs.go:276] 2 containers: [2b97b1c764dc6a9ef65fd39fe731d38828a1720efb0d703155aa021f2e1cfa20 aa6e1bccf3b9e02915f7d77581c43d4973f5450dee4cbe799ab617b0d312978e]
	I0819 21:16:27.209653 1351451 ssh_runner.go:195] Run: which crictl
	I0819 21:16:27.214678 1351451 ssh_runner.go:195] Run: which crictl
	I0819 21:16:27.218252 1351451 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 21:16:27.218352 1351451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 21:16:27.270015 1351451 cri.go:89] found id: "18b6c4b9755bc4f8f489bd6d79488572db886134c98fa78b6b1ebbbaf13e78a1"
	I0819 21:16:27.270059 1351451 cri.go:89] found id: ""
	I0819 21:16:27.270067 1351451 logs.go:276] 1 containers: [18b6c4b9755bc4f8f489bd6d79488572db886134c98fa78b6b1ebbbaf13e78a1]
	I0819 21:16:27.270165 1351451 ssh_runner.go:195] Run: which crictl
	I0819 21:16:27.274425 1351451 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0819 21:16:27.274551 1351451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0819 21:16:27.324064 1351451 cri.go:89] found id: "3b3340693ab7f6759c32d8665585f8d968c8d552cf6180284c4eaeed124e3d8e"
	I0819 21:16:27.324100 1351451 cri.go:89] found id: "556145ddf35210e0aed6253010261fc026d520ff7cf4f070c664092c877a33dd"
	I0819 21:16:27.324105 1351451 cri.go:89] found id: ""
	I0819 21:16:27.324112 1351451 logs.go:276] 2 containers: [3b3340693ab7f6759c32d8665585f8d968c8d552cf6180284c4eaeed124e3d8e 556145ddf35210e0aed6253010261fc026d520ff7cf4f070c664092c877a33dd]
	I0819 21:16:27.324194 1351451 ssh_runner.go:195] Run: which crictl
	I0819 21:16:27.351328 1351451 ssh_runner.go:195] Run: which crictl
	I0819 21:16:27.355700 1351451 logs.go:123] Gathering logs for coredns [f881846bec02adc45467aa1bcde798e06c205d2f213869c24d08f2277be24657] ...
	I0819 21:16:27.355736 1351451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f881846bec02adc45467aa1bcde798e06c205d2f213869c24d08f2277be24657"
	I0819 21:16:27.406666 1351451 logs.go:123] Gathering logs for kube-scheduler [62242d9fb5d7108b3cbff9c68fdf6693fa30bbe1a57a81165d6adf95d2f9a15a] ...
	I0819 21:16:27.406696 1351451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 62242d9fb5d7108b3cbff9c68fdf6693fa30bbe1a57a81165d6adf95d2f9a15a"
	I0819 21:16:27.461106 1351451 logs.go:123] Gathering logs for kube-proxy [3eaea7a4d564682eae94a2b48007416b88761d90abe2579a6ba16eff34bdba9a] ...
	I0819 21:16:27.461139 1351451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3eaea7a4d564682eae94a2b48007416b88761d90abe2579a6ba16eff34bdba9a"
	I0819 21:16:27.513081 1351451 logs.go:123] Gathering logs for storage-provisioner [556145ddf35210e0aed6253010261fc026d520ff7cf4f070c664092c877a33dd] ...
	I0819 21:16:27.513107 1351451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 556145ddf35210e0aed6253010261fc026d520ff7cf4f070c664092c877a33dd"
	I0819 21:16:27.570112 1351451 logs.go:123] Gathering logs for kubelet ...
	I0819 21:16:27.570155 1351451 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0819 21:16:27.636753 1351451 logs.go:138] Found kubelet problem: Aug 19 21:10:57 old-k8s-version-127648 kubelet[665]: E0819 21:10:57.621751     665 reflector.go:138] object-"kube-system"/"kindnet-token-xptkw": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-xptkw" is forbidden: User "system:node:old-k8s-version-127648" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-127648' and this object
	W0819 21:16:27.637011 1351451 logs.go:138] Found kubelet problem: Aug 19 21:10:57 old-k8s-version-127648 kubelet[665]: E0819 21:10:57.622026     665 reflector.go:138] object-"kube-system"/"coredns-token-jj86v": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-jj86v" is forbidden: User "system:node:old-k8s-version-127648" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-127648' and this object
	W0819 21:16:27.637232 1351451 logs.go:138] Found kubelet problem: Aug 19 21:10:57 old-k8s-version-127648 kubelet[665]: E0819 21:10:57.622171     665 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-127648" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-127648' and this object
	W0819 21:16:27.637539 1351451 logs.go:138] Found kubelet problem: Aug 19 21:10:57 old-k8s-version-127648 kubelet[665]: E0819 21:10:57.622316     665 reflector.go:138] object-"kube-system"/"storage-provisioner-token-ssctg": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-ssctg" is forbidden: User "system:node:old-k8s-version-127648" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-127648' and this object
	W0819 21:16:27.637752 1351451 logs.go:138] Found kubelet problem: Aug 19 21:10:57 old-k8s-version-127648 kubelet[665]: E0819 21:10:57.622453     665 reflector.go:138] object-"default"/"default-token-vbtl7": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-vbtl7" is forbidden: User "system:node:old-k8s-version-127648" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-127648' and this object
	W0819 21:16:27.637956 1351451 logs.go:138] Found kubelet problem: Aug 19 21:10:57 old-k8s-version-127648 kubelet[665]: E0819 21:10:57.671400     665 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-127648" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-127648' and this object
	W0819 21:16:27.638181 1351451 logs.go:138] Found kubelet problem: Aug 19 21:10:57 old-k8s-version-127648 kubelet[665]: E0819 21:10:57.671660     665 reflector.go:138] object-"kube-system"/"metrics-server-token-x764s": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-x764s" is forbidden: User "system:node:old-k8s-version-127648" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-127648' and this object
	W0819 21:16:27.646452 1351451 logs.go:138] Found kubelet problem: Aug 19 21:11:00 old-k8s-version-127648 kubelet[665]: E0819 21:11:00.574230     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0819 21:16:27.646679 1351451 logs.go:138] Found kubelet problem: Aug 19 21:11:00 old-k8s-version-127648 kubelet[665]: E0819 21:11:00.789676     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 21:16:27.650451 1351451 logs.go:138] Found kubelet problem: Aug 19 21:11:15 old-k8s-version-127648 kubelet[665]: E0819 21:11:15.561064     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0819 21:16:27.652683 1351451 logs.go:138] Found kubelet problem: Aug 19 21:11:22 old-k8s-version-127648 kubelet[665]: E0819 21:11:22.882357     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	W0819 21:16:27.653041 1351451 logs.go:138] Found kubelet problem: Aug 19 21:11:23 old-k8s-version-127648 kubelet[665]: E0819 21:11:23.880032     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	W0819 21:16:27.653248 1351451 logs.go:138] Found kubelet problem: Aug 19 21:11:27 old-k8s-version-127648 kubelet[665]: E0819 21:11:27.549335     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 21:16:27.654048 1351451 logs.go:138] Found kubelet problem: Aug 19 21:11:30 old-k8s-version-127648 kubelet[665]: E0819 21:11:30.903604     665 pod_workers.go:191] Error syncing pod 74e4d116-4e4e-4dc5-af07-3013282e840a ("storage-provisioner_kube-system(74e4d116-4e4e-4dc5-af07-3013282e840a)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(74e4d116-4e4e-4dc5-af07-3013282e840a)"
	W0819 21:16:27.654403 1351451 logs.go:138] Found kubelet problem: Aug 19 21:11:31 old-k8s-version-127648 kubelet[665]: E0819 21:11:31.736669     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	W0819 21:16:27.657331 1351451 logs.go:138] Found kubelet problem: Aug 19 21:11:41 old-k8s-version-127648 kubelet[665]: E0819 21:11:41.559000     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0819 21:16:27.658003 1351451 logs.go:138] Found kubelet problem: Aug 19 21:11:45 old-k8s-version-127648 kubelet[665]: E0819 21:11:45.947294     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	W0819 21:16:27.658476 1351451 logs.go:138] Found kubelet problem: Aug 19 21:11:51 old-k8s-version-127648 kubelet[665]: E0819 21:11:51.747256     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	W0819 21:16:27.658664 1351451 logs.go:138] Found kubelet problem: Aug 19 21:11:54 old-k8s-version-127648 kubelet[665]: E0819 21:11:54.549415     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 21:16:27.658852 1351451 logs.go:138] Found kubelet problem: Aug 19 21:12:06 old-k8s-version-127648 kubelet[665]: E0819 21:12:06.549632     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 21:16:27.659552 1351451 logs.go:138] Found kubelet problem: Aug 19 21:12:08 old-k8s-version-127648 kubelet[665]: E0819 21:12:08.101007     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	W0819 21:16:27.659888 1351451 logs.go:138] Found kubelet problem: Aug 19 21:12:11 old-k8s-version-127648 kubelet[665]: E0819 21:12:11.736495     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	W0819 21:16:27.660109 1351451 logs.go:138] Found kubelet problem: Aug 19 21:12:18 old-k8s-version-127648 kubelet[665]: E0819 21:12:18.550570     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 21:16:27.660468 1351451 logs.go:138] Found kubelet problem: Aug 19 21:12:22 old-k8s-version-127648 kubelet[665]: E0819 21:12:22.548984     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	W0819 21:16:27.663052 1351451 logs.go:138] Found kubelet problem: Aug 19 21:12:29 old-k8s-version-127648 kubelet[665]: E0819 21:12:29.557879     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0819 21:16:27.663411 1351451 logs.go:138] Found kubelet problem: Aug 19 21:12:35 old-k8s-version-127648 kubelet[665]: E0819 21:12:35.549004     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	W0819 21:16:27.663625 1351451 logs.go:138] Found kubelet problem: Aug 19 21:12:41 old-k8s-version-127648 kubelet[665]: E0819 21:12:41.549476     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 21:16:27.663977 1351451 logs.go:138] Found kubelet problem: Aug 19 21:12:47 old-k8s-version-127648 kubelet[665]: E0819 21:12:47.548954     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	W0819 21:16:27.664192 1351451 logs.go:138] Found kubelet problem: Aug 19 21:12:56 old-k8s-version-127648 kubelet[665]: E0819 21:12:56.550012     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 21:16:27.664817 1351451 logs.go:138] Found kubelet problem: Aug 19 21:12:59 old-k8s-version-127648 kubelet[665]: E0819 21:12:59.243984     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	W0819 21:16:27.665194 1351451 logs.go:138] Found kubelet problem: Aug 19 21:13:01 old-k8s-version-127648 kubelet[665]: E0819 21:13:01.736702     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	W0819 21:16:27.665402 1351451 logs.go:138] Found kubelet problem: Aug 19 21:13:08 old-k8s-version-127648 kubelet[665]: E0819 21:13:08.550071     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 21:16:27.665753 1351451 logs.go:138] Found kubelet problem: Aug 19 21:13:15 old-k8s-version-127648 kubelet[665]: E0819 21:13:15.549079     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	W0819 21:16:27.666078 1351451 logs.go:138] Found kubelet problem: Aug 19 21:13:21 old-k8s-version-127648 kubelet[665]: E0819 21:13:21.549597     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 21:16:27.666481 1351451 logs.go:138] Found kubelet problem: Aug 19 21:13:27 old-k8s-version-127648 kubelet[665]: E0819 21:13:27.549537     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	W0819 21:16:27.666691 1351451 logs.go:138] Found kubelet problem: Aug 19 21:13:36 old-k8s-version-127648 kubelet[665]: E0819 21:13:36.549526     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 21:16:27.667050 1351451 logs.go:138] Found kubelet problem: Aug 19 21:13:39 old-k8s-version-127648 kubelet[665]: E0819 21:13:39.549198     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	W0819 21:16:27.669612 1351451 logs.go:138] Found kubelet problem: Aug 19 21:13:50 old-k8s-version-127648 kubelet[665]: E0819 21:13:50.557660     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0819 21:16:27.670005 1351451 logs.go:138] Found kubelet problem: Aug 19 21:13:52 old-k8s-version-127648 kubelet[665]: E0819 21:13:52.549116     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	W0819 21:16:27.670216 1351451 logs.go:138] Found kubelet problem: Aug 19 21:14:01 old-k8s-version-127648 kubelet[665]: E0819 21:14:01.549487     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 21:16:27.670581 1351451 logs.go:138] Found kubelet problem: Aug 19 21:14:06 old-k8s-version-127648 kubelet[665]: E0819 21:14:06.548964     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	W0819 21:16:27.670789 1351451 logs.go:138] Found kubelet problem: Aug 19 21:14:16 old-k8s-version-127648 kubelet[665]: E0819 21:14:16.553951     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 21:16:27.671457 1351451 logs.go:138] Found kubelet problem: Aug 19 21:14:20 old-k8s-version-127648 kubelet[665]: E0819 21:14:20.469865     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	W0819 21:16:27.671816 1351451 logs.go:138] Found kubelet problem: Aug 19 21:14:21 old-k8s-version-127648 kubelet[665]: E0819 21:14:21.736453     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	W0819 21:16:27.672060 1351451 logs.go:138] Found kubelet problem: Aug 19 21:14:31 old-k8s-version-127648 kubelet[665]: E0819 21:14:31.549363     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 21:16:27.672464 1351451 logs.go:138] Found kubelet problem: Aug 19 21:14:36 old-k8s-version-127648 kubelet[665]: E0819 21:14:36.549968     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	W0819 21:16:27.672679 1351451 logs.go:138] Found kubelet problem: Aug 19 21:14:46 old-k8s-version-127648 kubelet[665]: E0819 21:14:46.549614     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 21:16:27.673041 1351451 logs.go:138] Found kubelet problem: Aug 19 21:14:48 old-k8s-version-127648 kubelet[665]: E0819 21:14:48.551815     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	W0819 21:16:27.673268 1351451 logs.go:138] Found kubelet problem: Aug 19 21:14:57 old-k8s-version-127648 kubelet[665]: E0819 21:14:57.549314     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 21:16:27.673723 1351451 logs.go:138] Found kubelet problem: Aug 19 21:15:03 old-k8s-version-127648 kubelet[665]: E0819 21:15:03.548948     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	W0819 21:16:27.673940 1351451 logs.go:138] Found kubelet problem: Aug 19 21:15:08 old-k8s-version-127648 kubelet[665]: E0819 21:15:08.550121     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 21:16:27.674342 1351451 logs.go:138] Found kubelet problem: Aug 19 21:15:18 old-k8s-version-127648 kubelet[665]: E0819 21:15:18.549558     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	W0819 21:16:27.674597 1351451 logs.go:138] Found kubelet problem: Aug 19 21:15:19 old-k8s-version-127648 kubelet[665]: E0819 21:15:19.549582     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 21:16:27.675013 1351451 logs.go:138] Found kubelet problem: Aug 19 21:15:31 old-k8s-version-127648 kubelet[665]: E0819 21:15:31.549167     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	W0819 21:16:27.675241 1351451 logs.go:138] Found kubelet problem: Aug 19 21:15:34 old-k8s-version-127648 kubelet[665]: E0819 21:15:34.549642     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 21:16:27.675558 1351451 logs.go:138] Found kubelet problem: Aug 19 21:15:46 old-k8s-version-127648 kubelet[665]: E0819 21:15:46.549479     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 21:16:27.675963 1351451 logs.go:138] Found kubelet problem: Aug 19 21:15:46 old-k8s-version-127648 kubelet[665]: E0819 21:15:46.551366     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	W0819 21:16:27.676343 1351451 logs.go:138] Found kubelet problem: Aug 19 21:15:59 old-k8s-version-127648 kubelet[665]: E0819 21:15:59.549085     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	W0819 21:16:27.676553 1351451 logs.go:138] Found kubelet problem: Aug 19 21:16:00 old-k8s-version-127648 kubelet[665]: E0819 21:16:00.554355     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 21:16:27.676780 1351451 logs.go:138] Found kubelet problem: Aug 19 21:16:14 old-k8s-version-127648 kubelet[665]: E0819 21:16:14.549495     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 21:16:27.677133 1351451 logs.go:138] Found kubelet problem: Aug 19 21:16:14 old-k8s-version-127648 kubelet[665]: E0819 21:16:14.550809     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	I0819 21:16:27.677148 1351451 logs.go:123] Gathering logs for kube-apiserver [863a642d161e02c4a9c3c27a6f9246c0a91afa231d64297aa4c31f2bf085af9e] ...
	I0819 21:16:27.677163 1351451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 863a642d161e02c4a9c3c27a6f9246c0a91afa231d64297aa4c31f2bf085af9e"
	I0819 21:16:27.783187 1351451 logs.go:123] Gathering logs for kube-apiserver [ddd7fbd4fc4b1fc85931d27f22ce6544aefafe553e7ad058a444cb1240d045ce] ...
	I0819 21:16:27.783234 1351451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ddd7fbd4fc4b1fc85931d27f22ce6544aefafe553e7ad058a444cb1240d045ce"
	I0819 21:16:27.878025 1351451 logs.go:123] Gathering logs for etcd [fe816949c9d21133dfa290d66d4a672f9934aa3758c37c6b52633143ce54f267] ...
	I0819 21:16:27.878059 1351451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fe816949c9d21133dfa290d66d4a672f9934aa3758c37c6b52633143ce54f267"
	I0819 21:16:27.971482 1351451 logs.go:123] Gathering logs for kubernetes-dashboard [18b6c4b9755bc4f8f489bd6d79488572db886134c98fa78b6b1ebbbaf13e78a1] ...
	I0819 21:16:27.971509 1351451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 18b6c4b9755bc4f8f489bd6d79488572db886134c98fa78b6b1ebbbaf13e78a1"
	I0819 21:16:28.026406 1351451 logs.go:123] Gathering logs for container status ...
	I0819 21:16:28.026441 1351451 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 21:16:28.077604 1351451 logs.go:123] Gathering logs for dmesg ...
	I0819 21:16:28.077635 1351451 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 21:16:28.106351 1351451 logs.go:123] Gathering logs for describe nodes ...
	I0819 21:16:28.106383 1351451 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 21:16:28.314793 1351451 logs.go:123] Gathering logs for etcd [de6ad75777dfe1d5d57ea2a85334e48c13ce3eefa7d1e083b8c7d11680d27ee5] ...
	I0819 21:16:28.314818 1351451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de6ad75777dfe1d5d57ea2a85334e48c13ce3eefa7d1e083b8c7d11680d27ee5"
	I0819 21:16:28.388732 1351451 logs.go:123] Gathering logs for kindnet [aa6e1bccf3b9e02915f7d77581c43d4973f5450dee4cbe799ab617b0d312978e] ...
	I0819 21:16:28.388809 1351451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aa6e1bccf3b9e02915f7d77581c43d4973f5450dee4cbe799ab617b0d312978e"
	I0819 21:16:28.461767 1351451 logs.go:123] Gathering logs for kube-controller-manager [7708e0950e37b20add16fe5d8f9b34d5f3c928b13167f2ce6017087538b63525] ...
	I0819 21:16:28.461855 1351451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7708e0950e37b20add16fe5d8f9b34d5f3c928b13167f2ce6017087538b63525"
	I0819 21:16:28.551587 1351451 logs.go:123] Gathering logs for kindnet [2b97b1c764dc6a9ef65fd39fe731d38828a1720efb0d703155aa021f2e1cfa20] ...
	I0819 21:16:28.551688 1351451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b97b1c764dc6a9ef65fd39fe731d38828a1720efb0d703155aa021f2e1cfa20"
	I0819 21:16:28.653556 1351451 logs.go:123] Gathering logs for storage-provisioner [3b3340693ab7f6759c32d8665585f8d968c8d552cf6180284c4eaeed124e3d8e] ...
	I0819 21:16:28.653593 1351451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3b3340693ab7f6759c32d8665585f8d968c8d552cf6180284c4eaeed124e3d8e"
	I0819 21:16:28.695383 1351451 logs.go:123] Gathering logs for containerd ...
	I0819 21:16:28.695411 1351451 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0819 21:16:28.762252 1351451 logs.go:123] Gathering logs for coredns [1c4204d908305e49b9bc8e0a713f3a60c3701a9edfadbf50bdaf6f12d055498e] ...
	I0819 21:16:28.762289 1351451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c4204d908305e49b9bc8e0a713f3a60c3701a9edfadbf50bdaf6f12d055498e"
	I0819 21:16:28.820470 1351451 logs.go:123] Gathering logs for kube-scheduler [c4fa68e3e489b8ed0f15e0a99bcf04281f96ae3328ebc705a20531880e4d77b4] ...
	I0819 21:16:28.820516 1351451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4fa68e3e489b8ed0f15e0a99bcf04281f96ae3328ebc705a20531880e4d77b4"
	I0819 21:16:28.873663 1351451 logs.go:123] Gathering logs for kube-proxy [63fbaf1d278cf5491ca7dd5ad4fdb07c3e8a90f6f4dfbcf5e073c12bf517db0e] ...
	I0819 21:16:28.873701 1351451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 63fbaf1d278cf5491ca7dd5ad4fdb07c3e8a90f6f4dfbcf5e073c12bf517db0e"
	I0819 21:16:28.917142 1351451 logs.go:123] Gathering logs for kube-controller-manager [a93b5c5a814c88adf2a58e19336d5018f346f39f3866b52c57d7df973b978363] ...
	I0819 21:16:28.917183 1351451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a93b5c5a814c88adf2a58e19336d5018f346f39f3866b52c57d7df973b978363"
	I0819 21:16:28.979164 1351451 out.go:358] Setting ErrFile to fd 2...
	I0819 21:16:28.979196 1351451 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0819 21:16:28.979271 1351451 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0819 21:16:28.979288 1351451 out.go:270]   Aug 19 21:15:46 old-k8s-version-127648 kubelet[665]: E0819 21:15:46.551366     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	  Aug 19 21:15:46 old-k8s-version-127648 kubelet[665]: E0819 21:15:46.551366     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	W0819 21:16:28.979296 1351451 out.go:270]   Aug 19 21:15:59 old-k8s-version-127648 kubelet[665]: E0819 21:15:59.549085     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	  Aug 19 21:15:59 old-k8s-version-127648 kubelet[665]: E0819 21:15:59.549085     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	W0819 21:16:28.979318 1351451 out.go:270]   Aug 19 21:16:00 old-k8s-version-127648 kubelet[665]: E0819 21:16:00.554355     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Aug 19 21:16:00 old-k8s-version-127648 kubelet[665]: E0819 21:16:00.554355     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 21:16:28.979328 1351451 out.go:270]   Aug 19 21:16:14 old-k8s-version-127648 kubelet[665]: E0819 21:16:14.549495     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Aug 19 21:16:14 old-k8s-version-127648 kubelet[665]: E0819 21:16:14.549495     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 21:16:28.979333 1351451 out.go:270]   Aug 19 21:16:14 old-k8s-version-127648 kubelet[665]: E0819 21:16:14.550809     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	  Aug 19 21:16:14 old-k8s-version-127648 kubelet[665]: E0819 21:16:14.550809     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	I0819 21:16:28.979346 1351451 out.go:358] Setting ErrFile to fd 2...
	I0819 21:16:28.979354 1351451 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 21:16:38.980698 1351451 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 21:16:39.000948 1351451 api_server.go:72] duration metric: took 5m58.287822906s to wait for apiserver process to appear ...
	I0819 21:16:39.000980 1351451 api_server.go:88] waiting for apiserver healthz status ...
	I0819 21:16:39.001038 1351451 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0819 21:16:39.001135 1351451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 21:16:39.096959 1351451 cri.go:89] found id: "863a642d161e02c4a9c3c27a6f9246c0a91afa231d64297aa4c31f2bf085af9e"
	I0819 21:16:39.096986 1351451 cri.go:89] found id: "ddd7fbd4fc4b1fc85931d27f22ce6544aefafe553e7ad058a444cb1240d045ce"
	I0819 21:16:39.096998 1351451 cri.go:89] found id: ""
	I0819 21:16:39.097007 1351451 logs.go:276] 2 containers: [863a642d161e02c4a9c3c27a6f9246c0a91afa231d64297aa4c31f2bf085af9e ddd7fbd4fc4b1fc85931d27f22ce6544aefafe553e7ad058a444cb1240d045ce]
	I0819 21:16:39.097082 1351451 ssh_runner.go:195] Run: which crictl
	I0819 21:16:39.103146 1351451 ssh_runner.go:195] Run: which crictl
	I0819 21:16:39.108342 1351451 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0819 21:16:39.108415 1351451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 21:16:39.181901 1351451 cri.go:89] found id: "fe816949c9d21133dfa290d66d4a672f9934aa3758c37c6b52633143ce54f267"
	I0819 21:16:39.181922 1351451 cri.go:89] found id: "de6ad75777dfe1d5d57ea2a85334e48c13ce3eefa7d1e083b8c7d11680d27ee5"
	I0819 21:16:39.181931 1351451 cri.go:89] found id: ""
	I0819 21:16:39.181942 1351451 logs.go:276] 2 containers: [fe816949c9d21133dfa290d66d4a672f9934aa3758c37c6b52633143ce54f267 de6ad75777dfe1d5d57ea2a85334e48c13ce3eefa7d1e083b8c7d11680d27ee5]
	I0819 21:16:39.182019 1351451 ssh_runner.go:195] Run: which crictl
	I0819 21:16:39.187650 1351451 ssh_runner.go:195] Run: which crictl
	I0819 21:16:39.195541 1351451 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0819 21:16:39.195721 1351451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 21:16:39.279675 1351451 cri.go:89] found id: "1c4204d908305e49b9bc8e0a713f3a60c3701a9edfadbf50bdaf6f12d055498e"
	I0819 21:16:39.279764 1351451 cri.go:89] found id: "f881846bec02adc45467aa1bcde798e06c205d2f213869c24d08f2277be24657"
	I0819 21:16:39.279788 1351451 cri.go:89] found id: ""
	I0819 21:16:39.279817 1351451 logs.go:276] 2 containers: [1c4204d908305e49b9bc8e0a713f3a60c3701a9edfadbf50bdaf6f12d055498e f881846bec02adc45467aa1bcde798e06c205d2f213869c24d08f2277be24657]
	I0819 21:16:39.280034 1351451 ssh_runner.go:195] Run: which crictl
	I0819 21:16:39.285431 1351451 ssh_runner.go:195] Run: which crictl
	I0819 21:16:39.290629 1351451 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0819 21:16:39.290859 1351451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 21:16:39.371561 1351451 cri.go:89] found id: "c4fa68e3e489b8ed0f15e0a99bcf04281f96ae3328ebc705a20531880e4d77b4"
	I0819 21:16:39.371650 1351451 cri.go:89] found id: "62242d9fb5d7108b3cbff9c68fdf6693fa30bbe1a57a81165d6adf95d2f9a15a"
	I0819 21:16:39.371676 1351451 cri.go:89] found id: ""
	I0819 21:16:39.371728 1351451 logs.go:276] 2 containers: [c4fa68e3e489b8ed0f15e0a99bcf04281f96ae3328ebc705a20531880e4d77b4 62242d9fb5d7108b3cbff9c68fdf6693fa30bbe1a57a81165d6adf95d2f9a15a]
	I0819 21:16:39.371851 1351451 ssh_runner.go:195] Run: which crictl
	I0819 21:16:39.376912 1351451 ssh_runner.go:195] Run: which crictl
	I0819 21:16:39.383459 1351451 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0819 21:16:39.383627 1351451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 21:16:39.448903 1351451 cri.go:89] found id: "3eaea7a4d564682eae94a2b48007416b88761d90abe2579a6ba16eff34bdba9a"
	I0819 21:16:39.448982 1351451 cri.go:89] found id: "63fbaf1d278cf5491ca7dd5ad4fdb07c3e8a90f6f4dfbcf5e073c12bf517db0e"
	I0819 21:16:39.449012 1351451 cri.go:89] found id: ""
	I0819 21:16:39.449034 1351451 logs.go:276] 2 containers: [3eaea7a4d564682eae94a2b48007416b88761d90abe2579a6ba16eff34bdba9a 63fbaf1d278cf5491ca7dd5ad4fdb07c3e8a90f6f4dfbcf5e073c12bf517db0e]
	I0819 21:16:39.449149 1351451 ssh_runner.go:195] Run: which crictl
	I0819 21:16:39.454509 1351451 ssh_runner.go:195] Run: which crictl
	I0819 21:16:39.459726 1351451 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 21:16:39.459909 1351451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 21:16:39.545859 1351451 cri.go:89] found id: "a93b5c5a814c88adf2a58e19336d5018f346f39f3866b52c57d7df973b978363"
	I0819 21:16:39.545936 1351451 cri.go:89] found id: "7708e0950e37b20add16fe5d8f9b34d5f3c928b13167f2ce6017087538b63525"
	I0819 21:16:39.545961 1351451 cri.go:89] found id: ""
	I0819 21:16:39.545981 1351451 logs.go:276] 2 containers: [a93b5c5a814c88adf2a58e19336d5018f346f39f3866b52c57d7df973b978363 7708e0950e37b20add16fe5d8f9b34d5f3c928b13167f2ce6017087538b63525]
	I0819 21:16:39.546083 1351451 ssh_runner.go:195] Run: which crictl
	I0819 21:16:39.561290 1351451 ssh_runner.go:195] Run: which crictl
	I0819 21:16:39.565853 1351451 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0819 21:16:39.566023 1351451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 21:16:39.630565 1351451 cri.go:89] found id: "2b97b1c764dc6a9ef65fd39fe731d38828a1720efb0d703155aa021f2e1cfa20"
	I0819 21:16:39.630648 1351451 cri.go:89] found id: "aa6e1bccf3b9e02915f7d77581c43d4973f5450dee4cbe799ab617b0d312978e"
	I0819 21:16:39.630667 1351451 cri.go:89] found id: ""
	I0819 21:16:39.630691 1351451 logs.go:276] 2 containers: [2b97b1c764dc6a9ef65fd39fe731d38828a1720efb0d703155aa021f2e1cfa20 aa6e1bccf3b9e02915f7d77581c43d4973f5450dee4cbe799ab617b0d312978e]
	I0819 21:16:39.630799 1351451 ssh_runner.go:195] Run: which crictl
	I0819 21:16:39.636088 1351451 ssh_runner.go:195] Run: which crictl
	I0819 21:16:39.640406 1351451 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 21:16:39.640539 1351451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 21:16:39.705002 1351451 cri.go:89] found id: "18b6c4b9755bc4f8f489bd6d79488572db886134c98fa78b6b1ebbbaf13e78a1"
	I0819 21:16:39.705082 1351451 cri.go:89] found id: ""
	I0819 21:16:39.705109 1351451 logs.go:276] 1 containers: [18b6c4b9755bc4f8f489bd6d79488572db886134c98fa78b6b1ebbbaf13e78a1]
	I0819 21:16:39.705209 1351451 ssh_runner.go:195] Run: which crictl
	I0819 21:16:39.713612 1351451 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0819 21:16:39.713752 1351451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0819 21:16:39.771883 1351451 cri.go:89] found id: "3b3340693ab7f6759c32d8665585f8d968c8d552cf6180284c4eaeed124e3d8e"
	I0819 21:16:39.771963 1351451 cri.go:89] found id: "556145ddf35210e0aed6253010261fc026d520ff7cf4f070c664092c877a33dd"
	I0819 21:16:39.771982 1351451 cri.go:89] found id: ""
	I0819 21:16:39.772007 1351451 logs.go:276] 2 containers: [3b3340693ab7f6759c32d8665585f8d968c8d552cf6180284c4eaeed124e3d8e 556145ddf35210e0aed6253010261fc026d520ff7cf4f070c664092c877a33dd]
	I0819 21:16:39.772133 1351451 ssh_runner.go:195] Run: which crictl
	I0819 21:16:39.777471 1351451 ssh_runner.go:195] Run: which crictl
	I0819 21:16:39.782219 1351451 logs.go:123] Gathering logs for dmesg ...
	I0819 21:16:39.782306 1351451 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 21:16:39.803506 1351451 logs.go:123] Gathering logs for kube-apiserver [ddd7fbd4fc4b1fc85931d27f22ce6544aefafe553e7ad058a444cb1240d045ce] ...
	I0819 21:16:39.803589 1351451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ddd7fbd4fc4b1fc85931d27f22ce6544aefafe553e7ad058a444cb1240d045ce"
	I0819 21:16:39.902538 1351451 logs.go:123] Gathering logs for kube-controller-manager [a93b5c5a814c88adf2a58e19336d5018f346f39f3866b52c57d7df973b978363] ...
	I0819 21:16:39.902629 1351451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a93b5c5a814c88adf2a58e19336d5018f346f39f3866b52c57d7df973b978363"
	I0819 21:16:39.988902 1351451 logs.go:123] Gathering logs for storage-provisioner [556145ddf35210e0aed6253010261fc026d520ff7cf4f070c664092c877a33dd] ...
	I0819 21:16:39.988936 1351451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 556145ddf35210e0aed6253010261fc026d520ff7cf4f070c664092c877a33dd"
	I0819 21:16:40.051419 1351451 logs.go:123] Gathering logs for container status ...
	I0819 21:16:40.051502 1351451 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 21:16:40.130468 1351451 logs.go:123] Gathering logs for describe nodes ...
	I0819 21:16:40.130553 1351451 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 21:16:40.440578 1351451 logs.go:123] Gathering logs for kube-apiserver [863a642d161e02c4a9c3c27a6f9246c0a91afa231d64297aa4c31f2bf085af9e] ...
	I0819 21:16:40.440661 1351451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 863a642d161e02c4a9c3c27a6f9246c0a91afa231d64297aa4c31f2bf085af9e"
	I0819 21:16:40.552968 1351451 logs.go:123] Gathering logs for etcd [de6ad75777dfe1d5d57ea2a85334e48c13ce3eefa7d1e083b8c7d11680d27ee5] ...
	I0819 21:16:40.553065 1351451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de6ad75777dfe1d5d57ea2a85334e48c13ce3eefa7d1e083b8c7d11680d27ee5"
	I0819 21:16:40.613652 1351451 logs.go:123] Gathering logs for coredns [1c4204d908305e49b9bc8e0a713f3a60c3701a9edfadbf50bdaf6f12d055498e] ...
	I0819 21:16:40.613737 1351451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c4204d908305e49b9bc8e0a713f3a60c3701a9edfadbf50bdaf6f12d055498e"
	I0819 21:16:40.704428 1351451 logs.go:123] Gathering logs for coredns [f881846bec02adc45467aa1bcde798e06c205d2f213869c24d08f2277be24657] ...
	I0819 21:16:40.704502 1351451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f881846bec02adc45467aa1bcde798e06c205d2f213869c24d08f2277be24657"
	I0819 21:16:40.762919 1351451 logs.go:123] Gathering logs for storage-provisioner [3b3340693ab7f6759c32d8665585f8d968c8d552cf6180284c4eaeed124e3d8e] ...
	I0819 21:16:40.762995 1351451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3b3340693ab7f6759c32d8665585f8d968c8d552cf6180284c4eaeed124e3d8e"
	I0819 21:16:40.818047 1351451 logs.go:123] Gathering logs for kubernetes-dashboard [18b6c4b9755bc4f8f489bd6d79488572db886134c98fa78b6b1ebbbaf13e78a1] ...
	I0819 21:16:40.818117 1351451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 18b6c4b9755bc4f8f489bd6d79488572db886134c98fa78b6b1ebbbaf13e78a1"
	I0819 21:16:40.872667 1351451 logs.go:123] Gathering logs for kube-scheduler [62242d9fb5d7108b3cbff9c68fdf6693fa30bbe1a57a81165d6adf95d2f9a15a] ...
	I0819 21:16:40.872739 1351451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 62242d9fb5d7108b3cbff9c68fdf6693fa30bbe1a57a81165d6adf95d2f9a15a"
	I0819 21:16:40.943174 1351451 logs.go:123] Gathering logs for kube-proxy [3eaea7a4d564682eae94a2b48007416b88761d90abe2579a6ba16eff34bdba9a] ...
	I0819 21:16:40.943243 1351451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3eaea7a4d564682eae94a2b48007416b88761d90abe2579a6ba16eff34bdba9a"
	I0819 21:16:41.028487 1351451 logs.go:123] Gathering logs for kube-proxy [63fbaf1d278cf5491ca7dd5ad4fdb07c3e8a90f6f4dfbcf5e073c12bf517db0e] ...
	I0819 21:16:41.028563 1351451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 63fbaf1d278cf5491ca7dd5ad4fdb07c3e8a90f6f4dfbcf5e073c12bf517db0e"
	I0819 21:16:41.099036 1351451 logs.go:123] Gathering logs for kube-controller-manager [7708e0950e37b20add16fe5d8f9b34d5f3c928b13167f2ce6017087538b63525] ...
	I0819 21:16:41.099118 1351451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7708e0950e37b20add16fe5d8f9b34d5f3c928b13167f2ce6017087538b63525"
	I0819 21:16:41.217328 1351451 logs.go:123] Gathering logs for kindnet [2b97b1c764dc6a9ef65fd39fe731d38828a1720efb0d703155aa021f2e1cfa20] ...
	I0819 21:16:41.217417 1351451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b97b1c764dc6a9ef65fd39fe731d38828a1720efb0d703155aa021f2e1cfa20"
	I0819 21:16:41.421061 1351451 logs.go:123] Gathering logs for kindnet [aa6e1bccf3b9e02915f7d77581c43d4973f5450dee4cbe799ab617b0d312978e] ...
	I0819 21:16:41.421169 1351451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aa6e1bccf3b9e02915f7d77581c43d4973f5450dee4cbe799ab617b0d312978e"
	I0819 21:16:41.500448 1351451 logs.go:123] Gathering logs for kubelet ...
	I0819 21:16:41.500533 1351451 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0819 21:16:41.586682 1351451 logs.go:138] Found kubelet problem: Aug 19 21:10:57 old-k8s-version-127648 kubelet[665]: E0819 21:10:57.621751     665 reflector.go:138] object-"kube-system"/"kindnet-token-xptkw": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-xptkw" is forbidden: User "system:node:old-k8s-version-127648" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-127648' and this object
	W0819 21:16:41.586985 1351451 logs.go:138] Found kubelet problem: Aug 19 21:10:57 old-k8s-version-127648 kubelet[665]: E0819 21:10:57.622026     665 reflector.go:138] object-"kube-system"/"coredns-token-jj86v": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-jj86v" is forbidden: User "system:node:old-k8s-version-127648" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-127648' and this object
	W0819 21:16:41.587223 1351451 logs.go:138] Found kubelet problem: Aug 19 21:10:57 old-k8s-version-127648 kubelet[665]: E0819 21:10:57.622171     665 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-127648" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-127648' and this object
	W0819 21:16:41.587487 1351451 logs.go:138] Found kubelet problem: Aug 19 21:10:57 old-k8s-version-127648 kubelet[665]: E0819 21:10:57.622316     665 reflector.go:138] object-"kube-system"/"storage-provisioner-token-ssctg": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-ssctg" is forbidden: User "system:node:old-k8s-version-127648" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-127648' and this object
	W0819 21:16:41.587731 1351451 logs.go:138] Found kubelet problem: Aug 19 21:10:57 old-k8s-version-127648 kubelet[665]: E0819 21:10:57.622453     665 reflector.go:138] object-"default"/"default-token-vbtl7": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-vbtl7" is forbidden: User "system:node:old-k8s-version-127648" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-127648' and this object
	W0819 21:16:41.587965 1351451 logs.go:138] Found kubelet problem: Aug 19 21:10:57 old-k8s-version-127648 kubelet[665]: E0819 21:10:57.671400     665 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-127648" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-127648' and this object
	W0819 21:16:41.588235 1351451 logs.go:138] Found kubelet problem: Aug 19 21:10:57 old-k8s-version-127648 kubelet[665]: E0819 21:10:57.671660     665 reflector.go:138] object-"kube-system"/"metrics-server-token-x764s": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-x764s" is forbidden: User "system:node:old-k8s-version-127648" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-127648' and this object
	W0819 21:16:41.597320 1351451 logs.go:138] Found kubelet problem: Aug 19 21:11:00 old-k8s-version-127648 kubelet[665]: E0819 21:11:00.574230     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0819 21:16:41.597559 1351451 logs.go:138] Found kubelet problem: Aug 19 21:11:00 old-k8s-version-127648 kubelet[665]: E0819 21:11:00.789676     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 21:16:41.600560 1351451 logs.go:138] Found kubelet problem: Aug 19 21:11:15 old-k8s-version-127648 kubelet[665]: E0819 21:11:15.561064     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0819 21:16:41.602849 1351451 logs.go:138] Found kubelet problem: Aug 19 21:11:22 old-k8s-version-127648 kubelet[665]: E0819 21:11:22.882357     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	W0819 21:16:41.603229 1351451 logs.go:138] Found kubelet problem: Aug 19 21:11:23 old-k8s-version-127648 kubelet[665]: E0819 21:11:23.880032     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	W0819 21:16:41.603468 1351451 logs.go:138] Found kubelet problem: Aug 19 21:11:27 old-k8s-version-127648 kubelet[665]: E0819 21:11:27.549335     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 21:16:41.604309 1351451 logs.go:138] Found kubelet problem: Aug 19 21:11:30 old-k8s-version-127648 kubelet[665]: E0819 21:11:30.903604     665 pod_workers.go:191] Error syncing pod 74e4d116-4e4e-4dc5-af07-3013282e840a ("storage-provisioner_kube-system(74e4d116-4e4e-4dc5-af07-3013282e840a)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(74e4d116-4e4e-4dc5-af07-3013282e840a)"
	W0819 21:16:41.604680 1351451 logs.go:138] Found kubelet problem: Aug 19 21:11:31 old-k8s-version-127648 kubelet[665]: E0819 21:11:31.736669     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	W0819 21:16:41.607638 1351451 logs.go:138] Found kubelet problem: Aug 19 21:11:41 old-k8s-version-127648 kubelet[665]: E0819 21:11:41.559000     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0819 21:16:41.608385 1351451 logs.go:138] Found kubelet problem: Aug 19 21:11:45 old-k8s-version-127648 kubelet[665]: E0819 21:11:45.947294     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	W0819 21:16:41.608989 1351451 logs.go:138] Found kubelet problem: Aug 19 21:11:51 old-k8s-version-127648 kubelet[665]: E0819 21:11:51.747256     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	W0819 21:16:41.609218 1351451 logs.go:138] Found kubelet problem: Aug 19 21:11:54 old-k8s-version-127648 kubelet[665]: E0819 21:11:54.549415     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 21:16:41.609955 1351451 logs.go:138] Found kubelet problem: Aug 19 21:12:06 old-k8s-version-127648 kubelet[665]: E0819 21:12:06.549632     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 21:16:41.610632 1351451 logs.go:138] Found kubelet problem: Aug 19 21:12:08 old-k8s-version-127648 kubelet[665]: E0819 21:12:08.101007     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	W0819 21:16:41.611019 1351451 logs.go:138] Found kubelet problem: Aug 19 21:12:11 old-k8s-version-127648 kubelet[665]: E0819 21:12:11.736495     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	W0819 21:16:41.611240 1351451 logs.go:138] Found kubelet problem: Aug 19 21:12:18 old-k8s-version-127648 kubelet[665]: E0819 21:12:18.550570     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 21:16:41.611614 1351451 logs.go:138] Found kubelet problem: Aug 19 21:12:22 old-k8s-version-127648 kubelet[665]: E0819 21:12:22.548984     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	W0819 21:16:41.614201 1351451 logs.go:138] Found kubelet problem: Aug 19 21:12:29 old-k8s-version-127648 kubelet[665]: E0819 21:12:29.557879     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0819 21:16:41.614589 1351451 logs.go:138] Found kubelet problem: Aug 19 21:12:35 old-k8s-version-127648 kubelet[665]: E0819 21:12:35.549004     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	W0819 21:16:41.614826 1351451 logs.go:138] Found kubelet problem: Aug 19 21:12:41 old-k8s-version-127648 kubelet[665]: E0819 21:12:41.549476     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 21:16:41.615210 1351451 logs.go:138] Found kubelet problem: Aug 19 21:12:47 old-k8s-version-127648 kubelet[665]: E0819 21:12:47.548954     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	W0819 21:16:41.615441 1351451 logs.go:138] Found kubelet problem: Aug 19 21:12:56 old-k8s-version-127648 kubelet[665]: E0819 21:12:56.550012     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 21:16:41.616091 1351451 logs.go:138] Found kubelet problem: Aug 19 21:12:59 old-k8s-version-127648 kubelet[665]: E0819 21:12:59.243984     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	W0819 21:16:41.616472 1351451 logs.go:138] Found kubelet problem: Aug 19 21:13:01 old-k8s-version-127648 kubelet[665]: E0819 21:13:01.736702     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	W0819 21:16:41.616702 1351451 logs.go:138] Found kubelet problem: Aug 19 21:13:08 old-k8s-version-127648 kubelet[665]: E0819 21:13:08.550071     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 21:16:41.618906 1351451 logs.go:138] Found kubelet problem: Aug 19 21:13:15 old-k8s-version-127648 kubelet[665]: E0819 21:13:15.549079     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	W0819 21:16:41.619148 1351451 logs.go:138] Found kubelet problem: Aug 19 21:13:21 old-k8s-version-127648 kubelet[665]: E0819 21:13:21.549597     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 21:16:41.619523 1351451 logs.go:138] Found kubelet problem: Aug 19 21:13:27 old-k8s-version-127648 kubelet[665]: E0819 21:13:27.549537     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	W0819 21:16:41.619742 1351451 logs.go:138] Found kubelet problem: Aug 19 21:13:36 old-k8s-version-127648 kubelet[665]: E0819 21:13:36.549526     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 21:16:41.620148 1351451 logs.go:138] Found kubelet problem: Aug 19 21:13:39 old-k8s-version-127648 kubelet[665]: E0819 21:13:39.549198     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	W0819 21:16:41.622878 1351451 logs.go:138] Found kubelet problem: Aug 19 21:13:50 old-k8s-version-127648 kubelet[665]: E0819 21:13:50.557660     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0819 21:16:41.623251 1351451 logs.go:138] Found kubelet problem: Aug 19 21:13:52 old-k8s-version-127648 kubelet[665]: E0819 21:13:52.549116     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	W0819 21:16:41.623473 1351451 logs.go:138] Found kubelet problem: Aug 19 21:14:01 old-k8s-version-127648 kubelet[665]: E0819 21:14:01.549487     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 21:16:41.623831 1351451 logs.go:138] Found kubelet problem: Aug 19 21:14:06 old-k8s-version-127648 kubelet[665]: E0819 21:14:06.548964     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	W0819 21:16:41.624043 1351451 logs.go:138] Found kubelet problem: Aug 19 21:14:16 old-k8s-version-127648 kubelet[665]: E0819 21:14:16.553951     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 21:16:41.624685 1351451 logs.go:138] Found kubelet problem: Aug 19 21:14:20 old-k8s-version-127648 kubelet[665]: E0819 21:14:20.469865     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	W0819 21:16:41.625055 1351451 logs.go:138] Found kubelet problem: Aug 19 21:14:21 old-k8s-version-127648 kubelet[665]: E0819 21:14:21.736453     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	W0819 21:16:41.625267 1351451 logs.go:138] Found kubelet problem: Aug 19 21:14:31 old-k8s-version-127648 kubelet[665]: E0819 21:14:31.549363     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 21:16:41.625624 1351451 logs.go:138] Found kubelet problem: Aug 19 21:14:36 old-k8s-version-127648 kubelet[665]: E0819 21:14:36.549968     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	W0819 21:16:41.625840 1351451 logs.go:138] Found kubelet problem: Aug 19 21:14:46 old-k8s-version-127648 kubelet[665]: E0819 21:14:46.549614     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 21:16:41.626210 1351451 logs.go:138] Found kubelet problem: Aug 19 21:14:48 old-k8s-version-127648 kubelet[665]: E0819 21:14:48.551815     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	W0819 21:16:41.626434 1351451 logs.go:138] Found kubelet problem: Aug 19 21:14:57 old-k8s-version-127648 kubelet[665]: E0819 21:14:57.549314     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 21:16:41.626812 1351451 logs.go:138] Found kubelet problem: Aug 19 21:15:03 old-k8s-version-127648 kubelet[665]: E0819 21:15:03.548948     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	W0819 21:16:41.627047 1351451 logs.go:138] Found kubelet problem: Aug 19 21:15:08 old-k8s-version-127648 kubelet[665]: E0819 21:15:08.550121     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 21:16:41.627407 1351451 logs.go:138] Found kubelet problem: Aug 19 21:15:18 old-k8s-version-127648 kubelet[665]: E0819 21:15:18.549558     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	W0819 21:16:41.627629 1351451 logs.go:138] Found kubelet problem: Aug 19 21:15:19 old-k8s-version-127648 kubelet[665]: E0819 21:15:19.549582     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 21:16:41.627992 1351451 logs.go:138] Found kubelet problem: Aug 19 21:15:31 old-k8s-version-127648 kubelet[665]: E0819 21:15:31.549167     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	W0819 21:16:41.628208 1351451 logs.go:138] Found kubelet problem: Aug 19 21:15:34 old-k8s-version-127648 kubelet[665]: E0819 21:15:34.549642     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 21:16:41.628441 1351451 logs.go:138] Found kubelet problem: Aug 19 21:15:46 old-k8s-version-127648 kubelet[665]: E0819 21:15:46.549479     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 21:16:41.628833 1351451 logs.go:138] Found kubelet problem: Aug 19 21:15:46 old-k8s-version-127648 kubelet[665]: E0819 21:15:46.551366     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	W0819 21:16:41.632353 1351451 logs.go:138] Found kubelet problem: Aug 19 21:15:59 old-k8s-version-127648 kubelet[665]: E0819 21:15:59.549085     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	W0819 21:16:41.632616 1351451 logs.go:138] Found kubelet problem: Aug 19 21:16:00 old-k8s-version-127648 kubelet[665]: E0819 21:16:00.554355     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 21:16:41.632858 1351451 logs.go:138] Found kubelet problem: Aug 19 21:16:14 old-k8s-version-127648 kubelet[665]: E0819 21:16:14.549495     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 21:16:41.633216 1351451 logs.go:138] Found kubelet problem: Aug 19 21:16:14 old-k8s-version-127648 kubelet[665]: E0819 21:16:14.550809     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	W0819 21:16:41.633561 1351451 logs.go:138] Found kubelet problem: Aug 19 21:16:28 old-k8s-version-127648 kubelet[665]: E0819 21:16:28.551255     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 21:16:41.633784 1351451 logs.go:138] Found kubelet problem: Aug 19 21:16:28 old-k8s-version-127648 kubelet[665]: E0819 21:16:28.549823     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	W0819 21:16:41.636275 1351451 logs.go:138] Found kubelet problem: Aug 19 21:16:39 old-k8s-version-127648 kubelet[665]: E0819 21:16:39.556947     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	I0819 21:16:41.636305 1351451 logs.go:123] Gathering logs for etcd [fe816949c9d21133dfa290d66d4a672f9934aa3758c37c6b52633143ce54f267] ...
	I0819 21:16:41.636337 1351451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fe816949c9d21133dfa290d66d4a672f9934aa3758c37c6b52633143ce54f267"
	I0819 21:16:41.736897 1351451 logs.go:123] Gathering logs for kube-scheduler [c4fa68e3e489b8ed0f15e0a99bcf04281f96ae3328ebc705a20531880e4d77b4] ...
	I0819 21:16:41.736971 1351451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4fa68e3e489b8ed0f15e0a99bcf04281f96ae3328ebc705a20531880e4d77b4"
	I0819 21:16:41.824699 1351451 logs.go:123] Gathering logs for containerd ...
	I0819 21:16:41.824726 1351451 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0819 21:16:41.905377 1351451 out.go:358] Setting ErrFile to fd 2...
	I0819 21:16:41.905452 1351451 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0819 21:16:41.905554 1351451 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0819 21:16:41.905600 1351451 out.go:270]   Aug 19 21:16:14 old-k8s-version-127648 kubelet[665]: E0819 21:16:14.549495     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Aug 19 21:16:14 old-k8s-version-127648 kubelet[665]: E0819 21:16:14.549495     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 21:16:41.905771 1351451 out.go:270]   Aug 19 21:16:14 old-k8s-version-127648 kubelet[665]: E0819 21:16:14.550809     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	  Aug 19 21:16:14 old-k8s-version-127648 kubelet[665]: E0819 21:16:14.550809     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	W0819 21:16:41.905806 1351451 out.go:270]   Aug 19 21:16:28 old-k8s-version-127648 kubelet[665]: E0819 21:16:28.551255     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Aug 19 21:16:28 old-k8s-version-127648 kubelet[665]: E0819 21:16:28.551255     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 21:16:41.905859 1351451 out.go:270]   Aug 19 21:16:28 old-k8s-version-127648 kubelet[665]: E0819 21:16:28.549823     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	  Aug 19 21:16:28 old-k8s-version-127648 kubelet[665]: E0819 21:16:28.549823     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	W0819 21:16:41.905906 1351451 out.go:270]   Aug 19 21:16:39 old-k8s-version-127648 kubelet[665]: E0819 21:16:39.556947     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	  Aug 19 21:16:39 old-k8s-version-127648 kubelet[665]: E0819 21:16:39.556947     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	I0819 21:16:41.905968 1351451 out.go:358] Setting ErrFile to fd 2...
	I0819 21:16:41.905993 1351451 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 21:16:51.906826 1351451 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0819 21:16:51.922052 1351451 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I0819 21:16:51.923612 1351451 out.go:201] 
	W0819 21:16:51.924862 1351451 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W0819 21:16:51.925047 1351451 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	* Suggestion: Control Plane could not update, try minikube delete --all --purge
	W0819 21:16:51.925129 1351451 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	* Related issue: https://github.com/kubernetes/minikube/issues/11417
	W0819 21:16:51.925214 1351451 out.go:270] * 
	* 
	W0819 21:16:51.926428 1351451 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 21:16:51.927283 1351451 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p old-k8s-version-127648 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0": exit status 102
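For local reproduction, the cleanup that the minikube output above suggests ("try minikube delete --all --purge") followed by the same start invocation should exercise the failing path. This is only a sketch assembled from the arguments recorded in this report; the profile name old-k8s-version-127648 and the out/minikube-linux-arm64 binary path are specific to this run:

	out/minikube-linux-arm64 delete --all --purge
	out/minikube-linux-arm64 start -p old-k8s-version-127648 --memory=2200 --alsologtostderr --wait=true \
	  --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false \
	  --driver=docker --container-runtime=containerd --kubernetes-version=v1.20.0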
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-127648
helpers_test.go:235: (dbg) docker inspect old-k8s-version-127648:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "25bf339acbc438fb617fe77b6aae580c5c20e7366b20f55663fad012efdbe96f",
	        "Created": "2024-08-19T21:07:18.307660453Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1351802,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-08-19T21:10:32.551900663Z",
	            "FinishedAt": "2024-08-19T21:10:31.126169811Z"
	        },
	        "Image": "sha256:decdd59746a9dba10062a73f6cd4b910c7b4e60613660b1022f8357747681c4d",
	        "ResolvConfPath": "/var/lib/docker/containers/25bf339acbc438fb617fe77b6aae580c5c20e7366b20f55663fad012efdbe96f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/25bf339acbc438fb617fe77b6aae580c5c20e7366b20f55663fad012efdbe96f/hostname",
	        "HostsPath": "/var/lib/docker/containers/25bf339acbc438fb617fe77b6aae580c5c20e7366b20f55663fad012efdbe96f/hosts",
	        "LogPath": "/var/lib/docker/containers/25bf339acbc438fb617fe77b6aae580c5c20e7366b20f55663fad012efdbe96f/25bf339acbc438fb617fe77b6aae580c5c20e7366b20f55663fad012efdbe96f-json.log",
	        "Name": "/old-k8s-version-127648",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-127648:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-127648",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/7663f8afeabfc8de23c79288e06286482060e83ccc32d878bbe716570d01503f-init/diff:/var/lib/docker/overlay2/56755d81a5447e9a4d21cbfbceb5eeee713182a8ca21fd0322f2eb2e99f83e1f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7663f8afeabfc8de23c79288e06286482060e83ccc32d878bbe716570d01503f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7663f8afeabfc8de23c79288e06286482060e83ccc32d878bbe716570d01503f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7663f8afeabfc8de23c79288e06286482060e83ccc32d878bbe716570d01503f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-127648",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-127648/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-127648",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-127648",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-127648",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7e22ca9502a6e7c0ab1675d177df9f74021350a53b5d78a37287d9fc788bc5a0",
	            "SandboxKey": "/var/run/docker/netns/7e22ca9502a6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34223"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34224"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34227"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34225"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34226"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-127648": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:55:02",
	                    "DriverOpts": null,
	                    "NetworkID": "17c0eb13a6bc73a0a11142e085a84cb4b821e3ea4f2f0e538e5a55422621cfee",
	                    "EndpointID": "9e55644721d53fd545c666d6a59436c4b0061a7507f2a6416829aaed89dc0dda",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-127648",
	                        "25bf339acbc4"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-127648 -n old-k8s-version-127648
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-127648 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-127648 logs -n 25: (2.826109866s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |         Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	| start   | -p cert-expiration-557259                              | cert-expiration-557259   | jenkins | v1.33.1 | 19 Aug 24 21:06 UTC | 19 Aug 24 21:06 UTC |
	|         | --memory=2048                                          |                          |         |         |                     |                     |
	|         | --cert-expiration=3m                                   |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	| ssh     | force-systemd-env-705767                               | force-systemd-env-705767 | jenkins | v1.33.1 | 19 Aug 24 21:06 UTC | 19 Aug 24 21:06 UTC |
	|         | ssh cat                                                |                          |         |         |                     |                     |
	|         | /etc/containerd/config.toml                            |                          |         |         |                     |                     |
	| delete  | -p force-systemd-env-705767                            | force-systemd-env-705767 | jenkins | v1.33.1 | 19 Aug 24 21:06 UTC | 19 Aug 24 21:06 UTC |
	| start   | -p cert-options-913353                                 | cert-options-913353      | jenkins | v1.33.1 | 19 Aug 24 21:06 UTC | 19 Aug 24 21:07 UTC |
	|         | --memory=2048                                          |                          |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1                              |                          |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15                          |                          |         |         |                     |                     |
	|         | --apiserver-names=localhost                            |                          |         |         |                     |                     |
	|         | --apiserver-names=www.google.com                       |                          |         |         |                     |                     |
	|         | --apiserver-port=8555                                  |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	| ssh     | cert-options-913353 ssh                                | cert-options-913353      | jenkins | v1.33.1 | 19 Aug 24 21:07 UTC | 19 Aug 24 21:07 UTC |
	|         | openssl x509 -text -noout -in                          |                          |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                          |         |         |                     |                     |
	| ssh     | -p cert-options-913353 -- sudo                         | cert-options-913353      | jenkins | v1.33.1 | 19 Aug 24 21:07 UTC | 19 Aug 24 21:07 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                          |         |         |                     |                     |
	| delete  | -p cert-options-913353                                 | cert-options-913353      | jenkins | v1.33.1 | 19 Aug 24 21:07 UTC | 19 Aug 24 21:07 UTC |
	| start   | -p old-k8s-version-127648                              | old-k8s-version-127648   | jenkins | v1.33.1 | 19 Aug 24 21:07 UTC | 19 Aug 24 21:10 UTC |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                          |         |         |                     |                     |
	|         | --kvm-network=default                                  |                          |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                          |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                          |         |         |                     |                     |
	|         | --keep-context=false                                   |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                          |         |         |                     |                     |
	| start   | -p cert-expiration-557259                              | cert-expiration-557259   | jenkins | v1.33.1 | 19 Aug 24 21:09 UTC | 19 Aug 24 21:09 UTC |
	|         | --memory=2048                                          |                          |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	| delete  | -p cert-expiration-557259                              | cert-expiration-557259   | jenkins | v1.33.1 | 19 Aug 24 21:09 UTC | 19 Aug 24 21:10 UTC |
	| start   | -p no-preload-785099                                   | no-preload-785099        | jenkins | v1.33.1 | 19 Aug 24 21:10 UTC | 19 Aug 24 21:11 UTC |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr                                      |                          |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                          |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-127648        | old-k8s-version-127648   | jenkins | v1.33.1 | 19 Aug 24 21:10 UTC | 19 Aug 24 21:10 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                          |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                          |         |         |                     |                     |
	| stop    | -p old-k8s-version-127648                              | old-k8s-version-127648   | jenkins | v1.33.1 | 19 Aug 24 21:10 UTC | 19 Aug 24 21:10 UTC |
	|         | --alsologtostderr -v=3                                 |                          |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-127648             | old-k8s-version-127648   | jenkins | v1.33.1 | 19 Aug 24 21:10 UTC | 19 Aug 24 21:10 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                          |         |         |                     |                     |
	| start   | -p old-k8s-version-127648                              | old-k8s-version-127648   | jenkins | v1.33.1 | 19 Aug 24 21:10 UTC |                     |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                          |         |         |                     |                     |
	|         | --kvm-network=default                                  |                          |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                          |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                          |         |         |                     |                     |
	|         | --keep-context=false                                   |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                          |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-785099             | no-preload-785099        | jenkins | v1.33.1 | 19 Aug 24 21:11 UTC | 19 Aug 24 21:11 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                          |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                          |         |         |                     |                     |
	| stop    | -p no-preload-785099                                   | no-preload-785099        | jenkins | v1.33.1 | 19 Aug 24 21:11 UTC | 19 Aug 24 21:11 UTC |
	|         | --alsologtostderr -v=3                                 |                          |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-785099                  | no-preload-785099        | jenkins | v1.33.1 | 19 Aug 24 21:11 UTC | 19 Aug 24 21:11 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                          |         |         |                     |                     |
	| start   | -p no-preload-785099                                   | no-preload-785099        | jenkins | v1.33.1 | 19 Aug 24 21:11 UTC | 19 Aug 24 21:16 UTC |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr                                      |                          |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                          |         |         |                     |                     |
	| image   | no-preload-785099 image list                           | no-preload-785099        | jenkins | v1.33.1 | 19 Aug 24 21:16 UTC | 19 Aug 24 21:16 UTC |
	|         | --format=json                                          |                          |         |         |                     |                     |
	| pause   | -p no-preload-785099                                   | no-preload-785099        | jenkins | v1.33.1 | 19 Aug 24 21:16 UTC | 19 Aug 24 21:16 UTC |
	|         | --alsologtostderr -v=1                                 |                          |         |         |                     |                     |
	| unpause | -p no-preload-785099                                   | no-preload-785099        | jenkins | v1.33.1 | 19 Aug 24 21:16 UTC | 19 Aug 24 21:16 UTC |
	|         | --alsologtostderr -v=1                                 |                          |         |         |                     |                     |
	| delete  | -p no-preload-785099                                   | no-preload-785099        | jenkins | v1.33.1 | 19 Aug 24 21:16 UTC | 19 Aug 24 21:16 UTC |
	| delete  | -p no-preload-785099                                   | no-preload-785099        | jenkins | v1.33.1 | 19 Aug 24 21:16 UTC | 19 Aug 24 21:16 UTC |
	| start   | -p embed-certs-249735                                  | embed-certs-249735       | jenkins | v1.33.1 | 19 Aug 24 21:16 UTC |                     |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                          |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                          |         |         |                     |                     |
	|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 21:16:23
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 21:16:23.688954 1361984 out.go:345] Setting OutFile to fd 1 ...
	I0819 21:16:23.689094 1361984 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 21:16:23.689105 1361984 out.go:358] Setting ErrFile to fd 2...
	I0819 21:16:23.689110 1361984 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 21:16:23.689358 1361984 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-1139612/.minikube/bin
	I0819 21:16:23.689773 1361984 out.go:352] Setting JSON to false
	I0819 21:16:23.690813 1361984 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":17930,"bootTime":1724084253,"procs":233,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0819 21:16:23.690894 1361984 start.go:139] virtualization:  
	I0819 21:16:23.694187 1361984 out.go:177] * [embed-certs-249735] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0819 21:16:23.697575 1361984 out.go:177]   - MINIKUBE_LOCATION=19423
	I0819 21:16:23.697649 1361984 notify.go:220] Checking for updates...
	I0819 21:16:23.702814 1361984 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 21:16:23.705441 1361984 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19423-1139612/kubeconfig
	I0819 21:16:23.708105 1361984 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-1139612/.minikube
	I0819 21:16:23.710734 1361984 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0819 21:16:23.713293 1361984 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 21:16:23.716445 1361984 config.go:182] Loaded profile config "old-k8s-version-127648": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0819 21:16:23.716547 1361984 driver.go:394] Setting default libvirt URI to qemu:///system
	I0819 21:16:23.741068 1361984 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0819 21:16:23.741182 1361984 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 21:16:23.818298 1361984 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:53 SystemTime:2024-08-19 21:16:23.807700033 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0819 21:16:23.818424 1361984 docker.go:307] overlay module found
	I0819 21:16:23.821419 1361984 out.go:177] * Using the docker driver based on user configuration
	I0819 21:16:23.823970 1361984 start.go:297] selected driver: docker
	I0819 21:16:23.823986 1361984 start.go:901] validating driver "docker" against <nil>
	I0819 21:16:23.824000 1361984 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 21:16:23.824733 1361984 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 21:16:23.871438 1361984 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:53 SystemTime:2024-08-19 21:16:23.862037674 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0819 21:16:23.871605 1361984 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 21:16:23.871843 1361984 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 21:16:23.874471 1361984 out.go:177] * Using Docker driver with root privileges
	I0819 21:16:23.877158 1361984 cni.go:84] Creating CNI manager for ""
	I0819 21:16:23.877192 1361984 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0819 21:16:23.877203 1361984 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
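	The cni.go lines above record the CNI decision for this profile: the docker driver combined with the containerd runtime means a CNI is required, and kindnet is the recommended default. A minimal Go sketch of that rule follows; the function and values are illustrative, not minikube's actual cni package API.

package main

import "fmt"

// chooseCNI mirrors the rule logged above: with the docker driver and a
// container runtime other than Docker (containerd here), a CNI is needed and
// kindnet is the recommended default. Illustrative only.
func chooseCNI(driver, runtime string) string {
	if driver == "docker" && runtime != "docker" {
		return "kindnet"
	}
	return "bridge"
}

func main() {
	fmt.Println(chooseCNI("docker", "containerd")) // prints: kindnet
}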
	I0819 21:16:23.877289 1361984 start.go:340] cluster config:
	{Name:embed-certs-249735 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-249735 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 21:16:23.880143 1361984 out.go:177] * Starting "embed-certs-249735" primary control-plane node in "embed-certs-249735" cluster
	I0819 21:16:23.882702 1361984 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0819 21:16:23.885412 1361984 out.go:177] * Pulling base image v0.0.44-1723740748-19452 ...
	I0819 21:16:23.888184 1361984 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime containerd
	I0819 21:16:23.888278 1361984 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19423-1139612/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4
	I0819 21:16:23.888276 1361984 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local docker daemon
	I0819 21:16:23.888307 1361984 cache.go:56] Caching tarball of preloaded images
	I0819 21:16:23.888398 1361984 preload.go:172] Found /home/jenkins/minikube-integration/19423-1139612/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 21:16:23.888409 1361984 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on containerd
	I0819 21:16:23.888514 1361984 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/embed-certs-249735/config.json ...
	I0819 21:16:23.888539 1361984 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/embed-certs-249735/config.json: {Name:mk091da1bb8ac7d79dd35dbce043f4eda208eed5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
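	The lock.go line above shows the profile config write being guarded by a file lock with a 500ms retry delay and a 1m timeout. A rough, hand-rolled Go sketch of that pattern follows; minikube uses a dedicated locking library, so this loop is only illustrative.

package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// writeWithLock acquires path+".lock" by exclusive creation, retrying every
// delay until timeout, then writes the file and releases the lock.
func writeWithLock(path string, data []byte, delay, timeout time.Duration) error {
	lock := path + ".lock"
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(lock, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			writeErr := os.WriteFile(path, data, 0o644)
			os.Remove(lock)
			return writeErr
		}
		if time.Now().After(deadline) {
			return errors.New("timed out acquiring " + lock)
		}
		time.Sleep(delay)
	}
}

func main() {
	// Placeholder path; mirrors the Delay:500ms Timeout:1m0s settings above.
	err := writeWithLock("/tmp/config.json", []byte("{}\n"), 500*time.Millisecond, time.Minute)
	fmt.Println("write result:", err)
}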
	W0819 21:16:23.915523 1361984 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d is of wrong architecture
	I0819 21:16:23.915546 1361984 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d to local cache
	I0819 21:16:23.915620 1361984 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local cache directory
	I0819 21:16:23.915643 1361984 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local cache directory, skipping pull
	I0819 21:16:23.915652 1361984 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d exists in cache, skipping pull
	I0819 21:16:23.915660 1361984 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d as a tarball
	I0819 21:16:23.915669 1361984 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d from local cache
	I0819 21:16:24.045426 1361984 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d from cached tarball
	I0819 21:16:24.045473 1361984 cache.go:194] Successfully downloaded all kic artifacts
	I0819 21:16:24.045519 1361984 start.go:360] acquireMachinesLock for embed-certs-249735: {Name:mk380c76ca7ca6e03f15ca8a66f61e37f7b1c31d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 21:16:24.046075 1361984 start.go:364] duration metric: took 529.916µs to acquireMachinesLock for "embed-certs-249735"
	I0819 21:16:24.046117 1361984 start.go:93] Provisioning new machine with config: &{Name:embed-certs-249735 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-249735 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0819 21:16:24.046217 1361984 start.go:125] createHost starting for "" (driver="docker")
	I0819 21:16:22.175433 1351451 pod_ready.go:103] pod "metrics-server-9975d5f86-4glzw" in "kube-system" namespace has status "Ready":"False"
	I0819 21:16:24.181379 1351451 pod_ready.go:103] pod "metrics-server-9975d5f86-4glzw" in "kube-system" namespace has status "Ready":"False"
	I0819 21:16:26.675944 1351451 pod_ready.go:103] pod "metrics-server-9975d5f86-4glzw" in "kube-system" namespace has status "Ready":"False"
	I0819 21:16:26.675980 1351451 pod_ready.go:82] duration metric: took 4m0.007787366s for pod "metrics-server-9975d5f86-4glzw" in "kube-system" namespace to be "Ready" ...
	E0819 21:16:26.675991 1351451 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0819 21:16:26.676030 1351451 pod_ready.go:39] duration metric: took 5m28.928635211s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 21:16:26.676055 1351451 api_server.go:52] waiting for apiserver process to appear ...
	I0819 21:16:26.676101 1351451 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0819 21:16:26.676201 1351451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 21:16:26.722017 1351451 cri.go:89] found id: "863a642d161e02c4a9c3c27a6f9246c0a91afa231d64297aa4c31f2bf085af9e"
	I0819 21:16:26.722043 1351451 cri.go:89] found id: "ddd7fbd4fc4b1fc85931d27f22ce6544aefafe553e7ad058a444cb1240d045ce"
	I0819 21:16:26.722049 1351451 cri.go:89] found id: ""
	I0819 21:16:26.722056 1351451 logs.go:276] 2 containers: [863a642d161e02c4a9c3c27a6f9246c0a91afa231d64297aa4c31f2bf085af9e ddd7fbd4fc4b1fc85931d27f22ce6544aefafe553e7ad058a444cb1240d045ce]
	I0819 21:16:26.722117 1351451 ssh_runner.go:195] Run: which crictl
	I0819 21:16:26.725648 1351451 ssh_runner.go:195] Run: which crictl
	I0819 21:16:26.729023 1351451 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0819 21:16:26.729144 1351451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 21:16:26.770425 1351451 cri.go:89] found id: "fe816949c9d21133dfa290d66d4a672f9934aa3758c37c6b52633143ce54f267"
	I0819 21:16:26.770446 1351451 cri.go:89] found id: "de6ad75777dfe1d5d57ea2a85334e48c13ce3eefa7d1e083b8c7d11680d27ee5"
	I0819 21:16:26.770451 1351451 cri.go:89] found id: ""
	I0819 21:16:26.770458 1351451 logs.go:276] 2 containers: [fe816949c9d21133dfa290d66d4a672f9934aa3758c37c6b52633143ce54f267 de6ad75777dfe1d5d57ea2a85334e48c13ce3eefa7d1e083b8c7d11680d27ee5]
	I0819 21:16:26.770516 1351451 ssh_runner.go:195] Run: which crictl
	I0819 21:16:26.774330 1351451 ssh_runner.go:195] Run: which crictl
	I0819 21:16:26.777988 1351451 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0819 21:16:26.778056 1351451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 21:16:24.049412 1361984 out.go:235] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0819 21:16:24.049684 1361984 start.go:159] libmachine.API.Create for "embed-certs-249735" (driver="docker")
	I0819 21:16:24.049723 1361984 client.go:168] LocalClient.Create starting
	I0819 21:16:24.049797 1361984 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19423-1139612/.minikube/certs/ca.pem
	I0819 21:16:24.049841 1361984 main.go:141] libmachine: Decoding PEM data...
	I0819 21:16:24.049862 1361984 main.go:141] libmachine: Parsing certificate...
	I0819 21:16:24.049929 1361984 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19423-1139612/.minikube/certs/cert.pem
	I0819 21:16:24.049952 1361984 main.go:141] libmachine: Decoding PEM data...
	I0819 21:16:24.049962 1361984 main.go:141] libmachine: Parsing certificate...
	I0819 21:16:24.050369 1361984 cli_runner.go:164] Run: docker network inspect embed-certs-249735 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0819 21:16:24.067497 1361984 cli_runner.go:211] docker network inspect embed-certs-249735 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0819 21:16:24.067610 1361984 network_create.go:284] running [docker network inspect embed-certs-249735] to gather additional debugging logs...
	I0819 21:16:24.067632 1361984 cli_runner.go:164] Run: docker network inspect embed-certs-249735
	W0819 21:16:24.085046 1361984 cli_runner.go:211] docker network inspect embed-certs-249735 returned with exit code 1
	I0819 21:16:24.085083 1361984 network_create.go:287] error running [docker network inspect embed-certs-249735]: docker network inspect embed-certs-249735: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-249735 not found
	I0819 21:16:24.085099 1361984 network_create.go:289] output of [docker network inspect embed-certs-249735]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-249735 not found
	
	** /stderr **
	I0819 21:16:24.085224 1361984 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0819 21:16:24.102075 1361984 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-459ee00191af IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:04:b1:ec:76} reservation:<nil>}
	I0819 21:16:24.102464 1361984 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-c806d94ebfcc IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:63:2d:99:b8} reservation:<nil>}
	I0819 21:16:24.102799 1361984 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-510cc45fc900 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:7d:71:58:69} reservation:<nil>}
	I0819 21:16:24.103287 1361984 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001876470}
	I0819 21:16:24.103314 1361984 network_create.go:124] attempt to create docker network embed-certs-249735 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0819 21:16:24.103379 1361984 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-249735 embed-certs-249735
	I0819 21:16:24.178012 1361984 network_create.go:108] docker network embed-certs-249735 192.168.76.0/24 created
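	The network.go lines above walk candidate private /24 subnets, skip the three already used by other profiles (192.168.49.0/24, 192.168.58.0/24, 192.168.67.0/24), and settle on 192.168.76.0/24 before creating the bridge network. A small Go sketch of that scan follows; the taken set is hard-coded here, whereas in practice it would come from inspecting existing docker networks.

package main

import "fmt"

// freeSubnet scans candidate /24 networks in steps of 9 (192.168.49.0/24,
// .58, .67, .76, ...) and returns the first one not already taken, matching
// the scan logged above.
func freeSubnet(taken map[string]bool) string {
	for third := 49; third <= 247; third += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", third)
		if !taken[cidr] {
			return cidr
		}
	}
	return ""
}

func main() {
	taken := map[string]bool{
		"192.168.49.0/24": true,
		"192.168.58.0/24": true,
		"192.168.67.0/24": true,
	}
	fmt.Println(freeSubnet(taken)) // 192.168.76.0/24
}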
	I0819 21:16:24.178046 1361984 kic.go:121] calculated static IP "192.168.76.2" for the "embed-certs-249735" container
	I0819 21:16:24.178174 1361984 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0819 21:16:24.206593 1361984 cli_runner.go:164] Run: docker volume create embed-certs-249735 --label name.minikube.sigs.k8s.io=embed-certs-249735 --label created_by.minikube.sigs.k8s.io=true
	I0819 21:16:24.230425 1361984 oci.go:103] Successfully created a docker volume embed-certs-249735
	I0819 21:16:24.230506 1361984 cli_runner.go:164] Run: docker run --rm --name embed-certs-249735-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-249735 --entrypoint /usr/bin/test -v embed-certs-249735:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d -d /var/lib
	I0819 21:16:24.927017 1361984 oci.go:107] Successfully prepared a docker volume embed-certs-249735
	I0819 21:16:24.927060 1361984 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime containerd
	I0819 21:16:24.927081 1361984 kic.go:194] Starting extracting preloaded images to volume ...
	I0819 21:16:24.927174 1361984 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19423-1139612/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-249735:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d -I lz4 -xf /preloaded.tar -C /extractDir
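	The two docker run commands above implement the preload step: a throwaway container first probes the named volume, then a second one untars the cached image tarball into it with lz4. A Go sketch of the extraction step via os/exec follows; the volume name, tarball path, and image tag are placeholders rather than values from this run.

package main

import (
	"fmt"
	"os/exec"
)

// extractPreload reproduces the second docker run above: mount the preload
// tarball read-only, mount the profile volume at /extractDir, and untar with
// lz4 inside a throwaway container.
func extractPreload(volume, tarball, image string) error {
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", volume+":/extractDir",
		image, "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("preload extraction failed: %v\n%s", err, out)
	}
	return nil
}

func main() {
	// Requires a local docker daemon plus an existing volume, tarball and image.
	if err := extractPreload("embed-certs-249735", "/tmp/preloaded.tar.lz4",
		"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452"); err != nil {
		fmt.Println(err)
	}
}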
	I0819 21:16:26.831135 1351451 cri.go:89] found id: "1c4204d908305e49b9bc8e0a713f3a60c3701a9edfadbf50bdaf6f12d055498e"
	I0819 21:16:26.831205 1351451 cri.go:89] found id: "f881846bec02adc45467aa1bcde798e06c205d2f213869c24d08f2277be24657"
	I0819 21:16:26.831224 1351451 cri.go:89] found id: ""
	I0819 21:16:26.831251 1351451 logs.go:276] 2 containers: [1c4204d908305e49b9bc8e0a713f3a60c3701a9edfadbf50bdaf6f12d055498e f881846bec02adc45467aa1bcde798e06c205d2f213869c24d08f2277be24657]
	I0819 21:16:26.831331 1351451 ssh_runner.go:195] Run: which crictl
	I0819 21:16:26.835121 1351451 ssh_runner.go:195] Run: which crictl
	I0819 21:16:26.838706 1351451 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0819 21:16:26.838823 1351451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 21:16:26.885971 1351451 cri.go:89] found id: "c4fa68e3e489b8ed0f15e0a99bcf04281f96ae3328ebc705a20531880e4d77b4"
	I0819 21:16:26.886020 1351451 cri.go:89] found id: "62242d9fb5d7108b3cbff9c68fdf6693fa30bbe1a57a81165d6adf95d2f9a15a"
	I0819 21:16:26.886043 1351451 cri.go:89] found id: ""
	I0819 21:16:26.886073 1351451 logs.go:276] 2 containers: [c4fa68e3e489b8ed0f15e0a99bcf04281f96ae3328ebc705a20531880e4d77b4 62242d9fb5d7108b3cbff9c68fdf6693fa30bbe1a57a81165d6adf95d2f9a15a]
	I0819 21:16:26.886149 1351451 ssh_runner.go:195] Run: which crictl
	I0819 21:16:26.890432 1351451 ssh_runner.go:195] Run: which crictl
	I0819 21:16:26.894051 1351451 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0819 21:16:26.894167 1351451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 21:16:26.940512 1351451 cri.go:89] found id: "3eaea7a4d564682eae94a2b48007416b88761d90abe2579a6ba16eff34bdba9a"
	I0819 21:16:26.940585 1351451 cri.go:89] found id: "63fbaf1d278cf5491ca7dd5ad4fdb07c3e8a90f6f4dfbcf5e073c12bf517db0e"
	I0819 21:16:26.940605 1351451 cri.go:89] found id: ""
	I0819 21:16:26.940633 1351451 logs.go:276] 2 containers: [3eaea7a4d564682eae94a2b48007416b88761d90abe2579a6ba16eff34bdba9a 63fbaf1d278cf5491ca7dd5ad4fdb07c3e8a90f6f4dfbcf5e073c12bf517db0e]
	I0819 21:16:26.940708 1351451 ssh_runner.go:195] Run: which crictl
	I0819 21:16:26.944532 1351451 ssh_runner.go:195] Run: which crictl
	I0819 21:16:26.947982 1351451 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 21:16:26.948103 1351451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 21:16:27.064038 1351451 cri.go:89] found id: "a93b5c5a814c88adf2a58e19336d5018f346f39f3866b52c57d7df973b978363"
	I0819 21:16:27.064065 1351451 cri.go:89] found id: "7708e0950e37b20add16fe5d8f9b34d5f3c928b13167f2ce6017087538b63525"
	I0819 21:16:27.064071 1351451 cri.go:89] found id: ""
	I0819 21:16:27.064079 1351451 logs.go:276] 2 containers: [a93b5c5a814c88adf2a58e19336d5018f346f39f3866b52c57d7df973b978363 7708e0950e37b20add16fe5d8f9b34d5f3c928b13167f2ce6017087538b63525]
	I0819 21:16:27.064160 1351451 ssh_runner.go:195] Run: which crictl
	I0819 21:16:27.075192 1351451 ssh_runner.go:195] Run: which crictl
	I0819 21:16:27.084336 1351451 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0819 21:16:27.084446 1351451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 21:16:27.209506 1351451 cri.go:89] found id: "2b97b1c764dc6a9ef65fd39fe731d38828a1720efb0d703155aa021f2e1cfa20"
	I0819 21:16:27.209543 1351451 cri.go:89] found id: "aa6e1bccf3b9e02915f7d77581c43d4973f5450dee4cbe799ab617b0d312978e"
	I0819 21:16:27.209548 1351451 cri.go:89] found id: ""
	I0819 21:16:27.209556 1351451 logs.go:276] 2 containers: [2b97b1c764dc6a9ef65fd39fe731d38828a1720efb0d703155aa021f2e1cfa20 aa6e1bccf3b9e02915f7d77581c43d4973f5450dee4cbe799ab617b0d312978e]
	I0819 21:16:27.209653 1351451 ssh_runner.go:195] Run: which crictl
	I0819 21:16:27.214678 1351451 ssh_runner.go:195] Run: which crictl
	I0819 21:16:27.218252 1351451 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 21:16:27.218352 1351451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 21:16:27.270015 1351451 cri.go:89] found id: "18b6c4b9755bc4f8f489bd6d79488572db886134c98fa78b6b1ebbbaf13e78a1"
	I0819 21:16:27.270059 1351451 cri.go:89] found id: ""
	I0819 21:16:27.270067 1351451 logs.go:276] 1 containers: [18b6c4b9755bc4f8f489bd6d79488572db886134c98fa78b6b1ebbbaf13e78a1]
	I0819 21:16:27.270165 1351451 ssh_runner.go:195] Run: which crictl
	I0819 21:16:27.274425 1351451 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0819 21:16:27.274551 1351451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0819 21:16:27.324064 1351451 cri.go:89] found id: "3b3340693ab7f6759c32d8665585f8d968c8d552cf6180284c4eaeed124e3d8e"
	I0819 21:16:27.324100 1351451 cri.go:89] found id: "556145ddf35210e0aed6253010261fc026d520ff7cf4f070c664092c877a33dd"
	I0819 21:16:27.324105 1351451 cri.go:89] found id: ""
	I0819 21:16:27.324112 1351451 logs.go:276] 2 containers: [3b3340693ab7f6759c32d8665585f8d968c8d552cf6180284c4eaeed124e3d8e 556145ddf35210e0aed6253010261fc026d520ff7cf4f070c664092c877a33dd]
	I0819 21:16:27.324194 1351451 ssh_runner.go:195] Run: which crictl
	I0819 21:16:27.351328 1351451 ssh_runner.go:195] Run: which crictl
	I0819 21:16:27.355700 1351451 logs.go:123] Gathering logs for coredns [f881846bec02adc45467aa1bcde798e06c205d2f213869c24d08f2277be24657] ...
	I0819 21:16:27.355736 1351451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f881846bec02adc45467aa1bcde798e06c205d2f213869c24d08f2277be24657"
	I0819 21:16:27.406666 1351451 logs.go:123] Gathering logs for kube-scheduler [62242d9fb5d7108b3cbff9c68fdf6693fa30bbe1a57a81165d6adf95d2f9a15a] ...
	I0819 21:16:27.406696 1351451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 62242d9fb5d7108b3cbff9c68fdf6693fa30bbe1a57a81165d6adf95d2f9a15a"
	I0819 21:16:27.461106 1351451 logs.go:123] Gathering logs for kube-proxy [3eaea7a4d564682eae94a2b48007416b88761d90abe2579a6ba16eff34bdba9a] ...
	I0819 21:16:27.461139 1351451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3eaea7a4d564682eae94a2b48007416b88761d90abe2579a6ba16eff34bdba9a"
	I0819 21:16:27.513081 1351451 logs.go:123] Gathering logs for storage-provisioner [556145ddf35210e0aed6253010261fc026d520ff7cf4f070c664092c877a33dd] ...
	I0819 21:16:27.513107 1351451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 556145ddf35210e0aed6253010261fc026d520ff7cf4f070c664092c877a33dd"
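	Each "Gathering logs for ..." step above follows the same two commands: crictl ps -a --quiet --name=<component> to find container IDs, then crictl logs --tail 400 <id> for each. A compact Go sketch of that loop follows; it assumes crictl and sudo are available on the node and is not minikube's actual logs.go implementation.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerLogs lists container IDs for a component with crictl, then fetches
// the last 400 log lines of each, mirroring the pattern in the log above.
func containerLogs(name string) (map[string]string, error) {
	ids, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	logs := map[string]string{}
	for _, id := range strings.Fields(string(ids)) {
		out, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
		if err != nil {
			return nil, err
		}
		logs[id] = string(out)
	}
	return logs, nil
}

func main() {
	logs, err := containerLogs("kube-apiserver")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Printf("collected logs for %d container(s)\n", len(logs))
}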
	I0819 21:16:27.570112 1351451 logs.go:123] Gathering logs for kubelet ...
	I0819 21:16:27.570155 1351451 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0819 21:16:27.636753 1351451 logs.go:138] Found kubelet problem: Aug 19 21:10:57 old-k8s-version-127648 kubelet[665]: E0819 21:10:57.621751     665 reflector.go:138] object-"kube-system"/"kindnet-token-xptkw": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-xptkw" is forbidden: User "system:node:old-k8s-version-127648" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-127648' and this object
	W0819 21:16:27.637011 1351451 logs.go:138] Found kubelet problem: Aug 19 21:10:57 old-k8s-version-127648 kubelet[665]: E0819 21:10:57.622026     665 reflector.go:138] object-"kube-system"/"coredns-token-jj86v": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-jj86v" is forbidden: User "system:node:old-k8s-version-127648" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-127648' and this object
	W0819 21:16:27.637232 1351451 logs.go:138] Found kubelet problem: Aug 19 21:10:57 old-k8s-version-127648 kubelet[665]: E0819 21:10:57.622171     665 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-127648" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-127648' and this object
	W0819 21:16:27.637539 1351451 logs.go:138] Found kubelet problem: Aug 19 21:10:57 old-k8s-version-127648 kubelet[665]: E0819 21:10:57.622316     665 reflector.go:138] object-"kube-system"/"storage-provisioner-token-ssctg": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-ssctg" is forbidden: User "system:node:old-k8s-version-127648" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-127648' and this object
	W0819 21:16:27.637752 1351451 logs.go:138] Found kubelet problem: Aug 19 21:10:57 old-k8s-version-127648 kubelet[665]: E0819 21:10:57.622453     665 reflector.go:138] object-"default"/"default-token-vbtl7": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-vbtl7" is forbidden: User "system:node:old-k8s-version-127648" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-127648' and this object
	W0819 21:16:27.637956 1351451 logs.go:138] Found kubelet problem: Aug 19 21:10:57 old-k8s-version-127648 kubelet[665]: E0819 21:10:57.671400     665 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-127648" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-127648' and this object
	W0819 21:16:27.638181 1351451 logs.go:138] Found kubelet problem: Aug 19 21:10:57 old-k8s-version-127648 kubelet[665]: E0819 21:10:57.671660     665 reflector.go:138] object-"kube-system"/"metrics-server-token-x764s": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-x764s" is forbidden: User "system:node:old-k8s-version-127648" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-127648' and this object
	W0819 21:16:27.646452 1351451 logs.go:138] Found kubelet problem: Aug 19 21:11:00 old-k8s-version-127648 kubelet[665]: E0819 21:11:00.574230     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0819 21:16:27.646679 1351451 logs.go:138] Found kubelet problem: Aug 19 21:11:00 old-k8s-version-127648 kubelet[665]: E0819 21:11:00.789676     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 21:16:27.650451 1351451 logs.go:138] Found kubelet problem: Aug 19 21:11:15 old-k8s-version-127648 kubelet[665]: E0819 21:11:15.561064     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0819 21:16:27.652683 1351451 logs.go:138] Found kubelet problem: Aug 19 21:11:22 old-k8s-version-127648 kubelet[665]: E0819 21:11:22.882357     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	W0819 21:16:27.653041 1351451 logs.go:138] Found kubelet problem: Aug 19 21:11:23 old-k8s-version-127648 kubelet[665]: E0819 21:11:23.880032     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	W0819 21:16:27.653248 1351451 logs.go:138] Found kubelet problem: Aug 19 21:11:27 old-k8s-version-127648 kubelet[665]: E0819 21:11:27.549335     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 21:16:27.654048 1351451 logs.go:138] Found kubelet problem: Aug 19 21:11:30 old-k8s-version-127648 kubelet[665]: E0819 21:11:30.903604     665 pod_workers.go:191] Error syncing pod 74e4d116-4e4e-4dc5-af07-3013282e840a ("storage-provisioner_kube-system(74e4d116-4e4e-4dc5-af07-3013282e840a)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(74e4d116-4e4e-4dc5-af07-3013282e840a)"
	W0819 21:16:27.654403 1351451 logs.go:138] Found kubelet problem: Aug 19 21:11:31 old-k8s-version-127648 kubelet[665]: E0819 21:11:31.736669     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	W0819 21:16:27.657331 1351451 logs.go:138] Found kubelet problem: Aug 19 21:11:41 old-k8s-version-127648 kubelet[665]: E0819 21:11:41.559000     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0819 21:16:27.658003 1351451 logs.go:138] Found kubelet problem: Aug 19 21:11:45 old-k8s-version-127648 kubelet[665]: E0819 21:11:45.947294     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	W0819 21:16:27.658476 1351451 logs.go:138] Found kubelet problem: Aug 19 21:11:51 old-k8s-version-127648 kubelet[665]: E0819 21:11:51.747256     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	W0819 21:16:27.658664 1351451 logs.go:138] Found kubelet problem: Aug 19 21:11:54 old-k8s-version-127648 kubelet[665]: E0819 21:11:54.549415     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 21:16:27.658852 1351451 logs.go:138] Found kubelet problem: Aug 19 21:12:06 old-k8s-version-127648 kubelet[665]: E0819 21:12:06.549632     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 21:16:27.659552 1351451 logs.go:138] Found kubelet problem: Aug 19 21:12:08 old-k8s-version-127648 kubelet[665]: E0819 21:12:08.101007     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	W0819 21:16:27.659888 1351451 logs.go:138] Found kubelet problem: Aug 19 21:12:11 old-k8s-version-127648 kubelet[665]: E0819 21:12:11.736495     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	W0819 21:16:27.660109 1351451 logs.go:138] Found kubelet problem: Aug 19 21:12:18 old-k8s-version-127648 kubelet[665]: E0819 21:12:18.550570     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 21:16:27.660468 1351451 logs.go:138] Found kubelet problem: Aug 19 21:12:22 old-k8s-version-127648 kubelet[665]: E0819 21:12:22.548984     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	W0819 21:16:27.663052 1351451 logs.go:138] Found kubelet problem: Aug 19 21:12:29 old-k8s-version-127648 kubelet[665]: E0819 21:12:29.557879     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0819 21:16:27.663411 1351451 logs.go:138] Found kubelet problem: Aug 19 21:12:35 old-k8s-version-127648 kubelet[665]: E0819 21:12:35.549004     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	W0819 21:16:27.663625 1351451 logs.go:138] Found kubelet problem: Aug 19 21:12:41 old-k8s-version-127648 kubelet[665]: E0819 21:12:41.549476     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 21:16:27.663977 1351451 logs.go:138] Found kubelet problem: Aug 19 21:12:47 old-k8s-version-127648 kubelet[665]: E0819 21:12:47.548954     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	W0819 21:16:27.664192 1351451 logs.go:138] Found kubelet problem: Aug 19 21:12:56 old-k8s-version-127648 kubelet[665]: E0819 21:12:56.550012     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 21:16:27.664817 1351451 logs.go:138] Found kubelet problem: Aug 19 21:12:59 old-k8s-version-127648 kubelet[665]: E0819 21:12:59.243984     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	W0819 21:16:27.665194 1351451 logs.go:138] Found kubelet problem: Aug 19 21:13:01 old-k8s-version-127648 kubelet[665]: E0819 21:13:01.736702     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	W0819 21:16:27.665402 1351451 logs.go:138] Found kubelet problem: Aug 19 21:13:08 old-k8s-version-127648 kubelet[665]: E0819 21:13:08.550071     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 21:16:27.665753 1351451 logs.go:138] Found kubelet problem: Aug 19 21:13:15 old-k8s-version-127648 kubelet[665]: E0819 21:13:15.549079     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	W0819 21:16:27.666078 1351451 logs.go:138] Found kubelet problem: Aug 19 21:13:21 old-k8s-version-127648 kubelet[665]: E0819 21:13:21.549597     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 21:16:27.666481 1351451 logs.go:138] Found kubelet problem: Aug 19 21:13:27 old-k8s-version-127648 kubelet[665]: E0819 21:13:27.549537     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	W0819 21:16:27.666691 1351451 logs.go:138] Found kubelet problem: Aug 19 21:13:36 old-k8s-version-127648 kubelet[665]: E0819 21:13:36.549526     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 21:16:27.667050 1351451 logs.go:138] Found kubelet problem: Aug 19 21:13:39 old-k8s-version-127648 kubelet[665]: E0819 21:13:39.549198     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	W0819 21:16:27.669612 1351451 logs.go:138] Found kubelet problem: Aug 19 21:13:50 old-k8s-version-127648 kubelet[665]: E0819 21:13:50.557660     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0819 21:16:27.670005 1351451 logs.go:138] Found kubelet problem: Aug 19 21:13:52 old-k8s-version-127648 kubelet[665]: E0819 21:13:52.549116     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	W0819 21:16:27.670216 1351451 logs.go:138] Found kubelet problem: Aug 19 21:14:01 old-k8s-version-127648 kubelet[665]: E0819 21:14:01.549487     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 21:16:27.670581 1351451 logs.go:138] Found kubelet problem: Aug 19 21:14:06 old-k8s-version-127648 kubelet[665]: E0819 21:14:06.548964     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	W0819 21:16:27.670789 1351451 logs.go:138] Found kubelet problem: Aug 19 21:14:16 old-k8s-version-127648 kubelet[665]: E0819 21:14:16.553951     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 21:16:27.671457 1351451 logs.go:138] Found kubelet problem: Aug 19 21:14:20 old-k8s-version-127648 kubelet[665]: E0819 21:14:20.469865     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	W0819 21:16:27.671816 1351451 logs.go:138] Found kubelet problem: Aug 19 21:14:21 old-k8s-version-127648 kubelet[665]: E0819 21:14:21.736453     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	W0819 21:16:27.672060 1351451 logs.go:138] Found kubelet problem: Aug 19 21:14:31 old-k8s-version-127648 kubelet[665]: E0819 21:14:31.549363     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 21:16:27.672464 1351451 logs.go:138] Found kubelet problem: Aug 19 21:14:36 old-k8s-version-127648 kubelet[665]: E0819 21:14:36.549968     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	W0819 21:16:27.672679 1351451 logs.go:138] Found kubelet problem: Aug 19 21:14:46 old-k8s-version-127648 kubelet[665]: E0819 21:14:46.549614     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 21:16:27.673041 1351451 logs.go:138] Found kubelet problem: Aug 19 21:14:48 old-k8s-version-127648 kubelet[665]: E0819 21:14:48.551815     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	W0819 21:16:27.673268 1351451 logs.go:138] Found kubelet problem: Aug 19 21:14:57 old-k8s-version-127648 kubelet[665]: E0819 21:14:57.549314     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 21:16:27.673723 1351451 logs.go:138] Found kubelet problem: Aug 19 21:15:03 old-k8s-version-127648 kubelet[665]: E0819 21:15:03.548948     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	W0819 21:16:27.673940 1351451 logs.go:138] Found kubelet problem: Aug 19 21:15:08 old-k8s-version-127648 kubelet[665]: E0819 21:15:08.550121     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 21:16:27.674342 1351451 logs.go:138] Found kubelet problem: Aug 19 21:15:18 old-k8s-version-127648 kubelet[665]: E0819 21:15:18.549558     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	W0819 21:16:27.674597 1351451 logs.go:138] Found kubelet problem: Aug 19 21:15:19 old-k8s-version-127648 kubelet[665]: E0819 21:15:19.549582     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 21:16:27.675013 1351451 logs.go:138] Found kubelet problem: Aug 19 21:15:31 old-k8s-version-127648 kubelet[665]: E0819 21:15:31.549167     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	W0819 21:16:27.675241 1351451 logs.go:138] Found kubelet problem: Aug 19 21:15:34 old-k8s-version-127648 kubelet[665]: E0819 21:15:34.549642     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 21:16:27.675558 1351451 logs.go:138] Found kubelet problem: Aug 19 21:15:46 old-k8s-version-127648 kubelet[665]: E0819 21:15:46.549479     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 21:16:27.675963 1351451 logs.go:138] Found kubelet problem: Aug 19 21:15:46 old-k8s-version-127648 kubelet[665]: E0819 21:15:46.551366     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	W0819 21:16:27.676343 1351451 logs.go:138] Found kubelet problem: Aug 19 21:15:59 old-k8s-version-127648 kubelet[665]: E0819 21:15:59.549085     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	W0819 21:16:27.676553 1351451 logs.go:138] Found kubelet problem: Aug 19 21:16:00 old-k8s-version-127648 kubelet[665]: E0819 21:16:00.554355     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 21:16:27.676780 1351451 logs.go:138] Found kubelet problem: Aug 19 21:16:14 old-k8s-version-127648 kubelet[665]: E0819 21:16:14.549495     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 21:16:27.677133 1351451 logs.go:138] Found kubelet problem: Aug 19 21:16:14 old-k8s-version-127648 kubelet[665]: E0819 21:16:14.550809     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	I0819 21:16:27.677148 1351451 logs.go:123] Gathering logs for kube-apiserver [863a642d161e02c4a9c3c27a6f9246c0a91afa231d64297aa4c31f2bf085af9e] ...
	I0819 21:16:27.677163 1351451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 863a642d161e02c4a9c3c27a6f9246c0a91afa231d64297aa4c31f2bf085af9e"
	I0819 21:16:27.783187 1351451 logs.go:123] Gathering logs for kube-apiserver [ddd7fbd4fc4b1fc85931d27f22ce6544aefafe553e7ad058a444cb1240d045ce] ...
	I0819 21:16:27.783234 1351451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ddd7fbd4fc4b1fc85931d27f22ce6544aefafe553e7ad058a444cb1240d045ce"
	I0819 21:16:27.878025 1351451 logs.go:123] Gathering logs for etcd [fe816949c9d21133dfa290d66d4a672f9934aa3758c37c6b52633143ce54f267] ...
	I0819 21:16:27.878059 1351451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fe816949c9d21133dfa290d66d4a672f9934aa3758c37c6b52633143ce54f267"
	I0819 21:16:27.971482 1351451 logs.go:123] Gathering logs for kubernetes-dashboard [18b6c4b9755bc4f8f489bd6d79488572db886134c98fa78b6b1ebbbaf13e78a1] ...
	I0819 21:16:27.971509 1351451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 18b6c4b9755bc4f8f489bd6d79488572db886134c98fa78b6b1ebbbaf13e78a1"
	I0819 21:16:28.026406 1351451 logs.go:123] Gathering logs for container status ...
	I0819 21:16:28.026441 1351451 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 21:16:28.077604 1351451 logs.go:123] Gathering logs for dmesg ...
	I0819 21:16:28.077635 1351451 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 21:16:28.106351 1351451 logs.go:123] Gathering logs for describe nodes ...
	I0819 21:16:28.106383 1351451 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 21:16:28.314793 1351451 logs.go:123] Gathering logs for etcd [de6ad75777dfe1d5d57ea2a85334e48c13ce3eefa7d1e083b8c7d11680d27ee5] ...
	I0819 21:16:28.314818 1351451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de6ad75777dfe1d5d57ea2a85334e48c13ce3eefa7d1e083b8c7d11680d27ee5"
	I0819 21:16:28.388732 1351451 logs.go:123] Gathering logs for kindnet [aa6e1bccf3b9e02915f7d77581c43d4973f5450dee4cbe799ab617b0d312978e] ...
	I0819 21:16:28.388809 1351451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aa6e1bccf3b9e02915f7d77581c43d4973f5450dee4cbe799ab617b0d312978e"
	I0819 21:16:28.461767 1351451 logs.go:123] Gathering logs for kube-controller-manager [7708e0950e37b20add16fe5d8f9b34d5f3c928b13167f2ce6017087538b63525] ...
	I0819 21:16:28.461855 1351451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7708e0950e37b20add16fe5d8f9b34d5f3c928b13167f2ce6017087538b63525"
	I0819 21:16:28.551587 1351451 logs.go:123] Gathering logs for kindnet [2b97b1c764dc6a9ef65fd39fe731d38828a1720efb0d703155aa021f2e1cfa20] ...
	I0819 21:16:28.551688 1351451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b97b1c764dc6a9ef65fd39fe731d38828a1720efb0d703155aa021f2e1cfa20"
	I0819 21:16:28.653556 1351451 logs.go:123] Gathering logs for storage-provisioner [3b3340693ab7f6759c32d8665585f8d968c8d552cf6180284c4eaeed124e3d8e] ...
	I0819 21:16:28.653593 1351451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3b3340693ab7f6759c32d8665585f8d968c8d552cf6180284c4eaeed124e3d8e"
	I0819 21:16:28.695383 1351451 logs.go:123] Gathering logs for containerd ...
	I0819 21:16:28.695411 1351451 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0819 21:16:28.762252 1351451 logs.go:123] Gathering logs for coredns [1c4204d908305e49b9bc8e0a713f3a60c3701a9edfadbf50bdaf6f12d055498e] ...
	I0819 21:16:28.762289 1351451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c4204d908305e49b9bc8e0a713f3a60c3701a9edfadbf50bdaf6f12d055498e"
	I0819 21:16:28.820470 1351451 logs.go:123] Gathering logs for kube-scheduler [c4fa68e3e489b8ed0f15e0a99bcf04281f96ae3328ebc705a20531880e4d77b4] ...
	I0819 21:16:28.820516 1351451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4fa68e3e489b8ed0f15e0a99bcf04281f96ae3328ebc705a20531880e4d77b4"
	I0819 21:16:28.873663 1351451 logs.go:123] Gathering logs for kube-proxy [63fbaf1d278cf5491ca7dd5ad4fdb07c3e8a90f6f4dfbcf5e073c12bf517db0e] ...
	I0819 21:16:28.873701 1351451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 63fbaf1d278cf5491ca7dd5ad4fdb07c3e8a90f6f4dfbcf5e073c12bf517db0e"
	I0819 21:16:28.917142 1351451 logs.go:123] Gathering logs for kube-controller-manager [a93b5c5a814c88adf2a58e19336d5018f346f39f3866b52c57d7df973b978363] ...
	I0819 21:16:28.917183 1351451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a93b5c5a814c88adf2a58e19336d5018f346f39f3866b52c57d7df973b978363"
	I0819 21:16:28.979164 1351451 out.go:358] Setting ErrFile to fd 2...
	I0819 21:16:28.979196 1351451 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0819 21:16:28.979271 1351451 out.go:270] X Problems detected in kubelet:
	W0819 21:16:28.979288 1351451 out.go:270]   Aug 19 21:15:46 old-k8s-version-127648 kubelet[665]: E0819 21:15:46.551366     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	W0819 21:16:28.979296 1351451 out.go:270]   Aug 19 21:15:59 old-k8s-version-127648 kubelet[665]: E0819 21:15:59.549085     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	W0819 21:16:28.979318 1351451 out.go:270]   Aug 19 21:16:00 old-k8s-version-127648 kubelet[665]: E0819 21:16:00.554355     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 21:16:28.979328 1351451 out.go:270]   Aug 19 21:16:14 old-k8s-version-127648 kubelet[665]: E0819 21:16:14.549495     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 21:16:28.979333 1351451 out.go:270]   Aug 19 21:16:14 old-k8s-version-127648 kubelet[665]: E0819 21:16:14.550809     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	I0819 21:16:28.979346 1351451 out.go:358] Setting ErrFile to fd 2...
	I0819 21:16:28.979354 1351451 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 21:16:29.925889 1361984 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19423-1139612/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-249735:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d -I lz4 -xf /preloaded.tar -C /extractDir: (4.998676077s)
	I0819 21:16:29.925922 1361984 kic.go:203] duration metric: took 4.998834842s to extract preloaded images to volume ...
	W0819 21:16:29.926081 1361984 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0819 21:16:29.926238 1361984 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0819 21:16:29.978802 1361984 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-249735 --name embed-certs-249735 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-249735 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-249735 --network embed-certs-249735 --ip 192.168.76.2 --volume embed-certs-249735:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d
	I0819 21:16:30.412373 1361984 cli_runner.go:164] Run: docker container inspect embed-certs-249735 --format={{.State.Running}}
	I0819 21:16:30.434243 1361984 cli_runner.go:164] Run: docker container inspect embed-certs-249735 --format={{.State.Status}}
	I0819 21:16:30.454077 1361984 cli_runner.go:164] Run: docker exec embed-certs-249735 stat /var/lib/dpkg/alternatives/iptables
	I0819 21:16:30.537668 1361984 oci.go:144] the created container "embed-certs-249735" has a running status.
	I0819 21:16:30.537700 1361984 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19423-1139612/.minikube/machines/embed-certs-249735/id_rsa...
	I0819 21:16:31.393136 1361984 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19423-1139612/.minikube/machines/embed-certs-249735/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0819 21:16:31.416814 1361984 cli_runner.go:164] Run: docker container inspect embed-certs-249735 --format={{.State.Status}}
	I0819 21:16:31.442058 1361984 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0819 21:16:31.442077 1361984 kic_runner.go:114] Args: [docker exec --privileged embed-certs-249735 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0819 21:16:31.510492 1361984 cli_runner.go:164] Run: docker container inspect embed-certs-249735 --format={{.State.Status}}
	I0819 21:16:31.530279 1361984 machine.go:93] provisionDockerMachine start ...
	I0819 21:16:31.530366 1361984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-249735
	I0819 21:16:31.551910 1361984 main.go:141] libmachine: Using SSH client type: native
	I0819 21:16:31.552189 1361984 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 34233 <nil> <nil>}
	I0819 21:16:31.552198 1361984 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 21:16:31.692545 1361984 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-249735
	
	I0819 21:16:31.692632 1361984 ubuntu.go:169] provisioning hostname "embed-certs-249735"
	I0819 21:16:31.692730 1361984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-249735
	I0819 21:16:31.714398 1361984 main.go:141] libmachine: Using SSH client type: native
	I0819 21:16:31.714645 1361984 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 34233 <nil> <nil>}
	I0819 21:16:31.714656 1361984 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-249735 && echo "embed-certs-249735" | sudo tee /etc/hostname
	I0819 21:16:31.864916 1361984 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-249735
	
	I0819 21:16:31.865067 1361984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-249735
	I0819 21:16:31.888316 1361984 main.go:141] libmachine: Using SSH client type: native
	I0819 21:16:31.888576 1361984 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 34233 <nil> <nil>}
	I0819 21:16:31.888599 1361984 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-249735' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-249735/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-249735' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 21:16:32.031542 1361984 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 21:16:32.031603 1361984 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19423-1139612/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-1139612/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-1139612/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-1139612/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-1139612/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-1139612/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-1139612/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-1139612/.minikube}
	I0819 21:16:32.031642 1361984 ubuntu.go:177] setting up certificates
	I0819 21:16:32.031655 1361984 provision.go:84] configureAuth start
	I0819 21:16:32.031736 1361984 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-249735
	I0819 21:16:32.051745 1361984 provision.go:143] copyHostCerts
	I0819 21:16:32.051818 1361984 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-1139612/.minikube/ca.pem, removing ...
	I0819 21:16:32.051835 1361984 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-1139612/.minikube/ca.pem
	I0819 21:16:32.051915 1361984 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-1139612/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-1139612/.minikube/ca.pem (1078 bytes)
	I0819 21:16:32.052031 1361984 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-1139612/.minikube/cert.pem, removing ...
	I0819 21:16:32.052042 1361984 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-1139612/.minikube/cert.pem
	I0819 21:16:32.052075 1361984 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-1139612/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-1139612/.minikube/cert.pem (1123 bytes)
	I0819 21:16:32.052173 1361984 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-1139612/.minikube/key.pem, removing ...
	I0819 21:16:32.052183 1361984 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-1139612/.minikube/key.pem
	I0819 21:16:32.052210 1361984 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-1139612/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-1139612/.minikube/key.pem (1675 bytes)
	I0819 21:16:32.052323 1361984 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-1139612/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-1139612/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-1139612/.minikube/certs/ca-key.pem org=jenkins.embed-certs-249735 san=[127.0.0.1 192.168.76.2 embed-certs-249735 localhost minikube]
	I0819 21:16:32.312491 1361984 provision.go:177] copyRemoteCerts
	I0819 21:16:32.312566 1361984 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 21:16:32.312616 1361984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-249735
	I0819 21:16:32.339108 1361984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34233 SSHKeyPath:/home/jenkins/minikube-integration/19423-1139612/.minikube/machines/embed-certs-249735/id_rsa Username:docker}
	I0819 21:16:32.437580 1361984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1139612/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0819 21:16:32.463439 1361984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1139612/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0819 21:16:32.489345 1361984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1139612/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0819 21:16:32.516829 1361984 provision.go:87] duration metric: took 485.153146ms to configureAuth
	I0819 21:16:32.516862 1361984 ubuntu.go:193] setting minikube options for container-runtime
	I0819 21:16:32.517053 1361984 config.go:182] Loaded profile config "embed-certs-249735": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0819 21:16:32.517069 1361984 machine.go:96] duration metric: took 986.773095ms to provisionDockerMachine
	I0819 21:16:32.517076 1361984 client.go:171] duration metric: took 8.467343131s to LocalClient.Create
	I0819 21:16:32.517096 1361984 start.go:167] duration metric: took 8.467414702s to libmachine.API.Create "embed-certs-249735"
	I0819 21:16:32.517126 1361984 start.go:293] postStartSetup for "embed-certs-249735" (driver="docker")
	I0819 21:16:32.517141 1361984 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 21:16:32.517200 1361984 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 21:16:32.517250 1361984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-249735
	I0819 21:16:32.535123 1361984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34233 SSHKeyPath:/home/jenkins/minikube-integration/19423-1139612/.minikube/machines/embed-certs-249735/id_rsa Username:docker}
	I0819 21:16:32.633953 1361984 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 21:16:32.637222 1361984 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0819 21:16:32.637259 1361984 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0819 21:16:32.637273 1361984 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0819 21:16:32.637279 1361984 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0819 21:16:32.637294 1361984 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-1139612/.minikube/addons for local assets ...
	I0819 21:16:32.637353 1361984 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-1139612/.minikube/files for local assets ...
	I0819 21:16:32.637434 1361984 filesync.go:149] local asset: /home/jenkins/minikube-integration/19423-1139612/.minikube/files/etc/ssl/certs/11450182.pem -> 11450182.pem in /etc/ssl/certs
	I0819 21:16:32.637540 1361984 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 21:16:32.646017 1361984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1139612/.minikube/files/etc/ssl/certs/11450182.pem --> /etc/ssl/certs/11450182.pem (1708 bytes)
	I0819 21:16:32.670914 1361984 start.go:296] duration metric: took 153.768569ms for postStartSetup
	I0819 21:16:32.671346 1361984 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-249735
	I0819 21:16:32.688807 1361984 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/embed-certs-249735/config.json ...
	I0819 21:16:32.689103 1361984 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 21:16:32.689158 1361984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-249735
	I0819 21:16:32.706258 1361984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34233 SSHKeyPath:/home/jenkins/minikube-integration/19423-1139612/.minikube/machines/embed-certs-249735/id_rsa Username:docker}
	I0819 21:16:32.797747 1361984 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0819 21:16:32.802724 1361984 start.go:128] duration metric: took 8.756490797s to createHost
	I0819 21:16:32.802748 1361984 start.go:83] releasing machines lock for "embed-certs-249735", held for 8.756654157s
	I0819 21:16:32.802837 1361984 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-249735
	I0819 21:16:32.828533 1361984 ssh_runner.go:195] Run: cat /version.json
	I0819 21:16:32.828671 1361984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-249735
	I0819 21:16:32.829048 1361984 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 21:16:32.829198 1361984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-249735
	I0819 21:16:32.860188 1361984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34233 SSHKeyPath:/home/jenkins/minikube-integration/19423-1139612/.minikube/machines/embed-certs-249735/id_rsa Username:docker}
	I0819 21:16:32.873332 1361984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34233 SSHKeyPath:/home/jenkins/minikube-integration/19423-1139612/.minikube/machines/embed-certs-249735/id_rsa Username:docker}
	I0819 21:16:33.086123 1361984 ssh_runner.go:195] Run: systemctl --version
	I0819 21:16:33.091002 1361984 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0819 21:16:33.095690 1361984 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0819 21:16:33.121618 1361984 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0819 21:16:33.121741 1361984 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 21:16:33.151086 1361984 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0819 21:16:33.151110 1361984 start.go:495] detecting cgroup driver to use...
	I0819 21:16:33.151164 1361984 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0819 21:16:33.151240 1361984 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0819 21:16:33.164664 1361984 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0819 21:16:33.176309 1361984 docker.go:217] disabling cri-docker service (if available) ...
	I0819 21:16:33.176418 1361984 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 21:16:33.191433 1361984 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 21:16:33.205817 1361984 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 21:16:33.321871 1361984 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 21:16:33.427827 1361984 docker.go:233] disabling docker service ...
	I0819 21:16:33.427951 1361984 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 21:16:33.450591 1361984 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 21:16:33.464361 1361984 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 21:16:33.560103 1361984 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 21:16:33.661097 1361984 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 21:16:33.673253 1361984 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 21:16:33.691821 1361984 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0819 21:16:33.703033 1361984 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0819 21:16:33.714582 1361984 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0819 21:16:33.714707 1361984 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0819 21:16:33.731723 1361984 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 21:16:33.743041 1361984 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0819 21:16:33.753762 1361984 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 21:16:33.765315 1361984 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 21:16:33.775088 1361984 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0819 21:16:33.786582 1361984 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0819 21:16:33.797912 1361984 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0819 21:16:33.808931 1361984 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 21:16:33.817923 1361984 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 21:16:33.827796 1361984 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 21:16:33.921651 1361984 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0819 21:16:34.082490 1361984 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0819 21:16:34.082594 1361984 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0819 21:16:34.086454 1361984 start.go:563] Will wait 60s for crictl version
	I0819 21:16:34.086540 1361984 ssh_runner.go:195] Run: which crictl
	I0819 21:16:34.089949 1361984 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 21:16:34.128968 1361984 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.20
	RuntimeApiVersion:  v1
	I0819 21:16:34.129067 1361984 ssh_runner.go:195] Run: containerd --version
	I0819 21:16:34.151719 1361984 ssh_runner.go:195] Run: containerd --version
	I0819 21:16:34.184490 1361984 out.go:177] * Preparing Kubernetes v1.31.0 on containerd 1.7.20 ...
	I0819 21:16:34.185560 1361984 cli_runner.go:164] Run: docker network inspect embed-certs-249735 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0819 21:16:34.203095 1361984 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0819 21:16:34.206776 1361984 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 21:16:34.217890 1361984 kubeadm.go:883] updating cluster {Name:embed-certs-249735 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-249735 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 21:16:34.218026 1361984 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime containerd
	I0819 21:16:34.218094 1361984 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 21:16:34.256017 1361984 containerd.go:627] all images are preloaded for containerd runtime.
	I0819 21:16:34.256043 1361984 containerd.go:534] Images already preloaded, skipping extraction
	I0819 21:16:34.256104 1361984 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 21:16:34.293259 1361984 containerd.go:627] all images are preloaded for containerd runtime.
	I0819 21:16:34.293283 1361984 cache_images.go:84] Images are preloaded, skipping loading
	I0819 21:16:34.293291 1361984 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.31.0 containerd true true} ...
	I0819 21:16:34.293398 1361984 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-249735 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:embed-certs-249735 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 21:16:34.293471 1361984 ssh_runner.go:195] Run: sudo crictl info
	I0819 21:16:34.342565 1361984 cni.go:84] Creating CNI manager for ""
	I0819 21:16:34.342590 1361984 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0819 21:16:34.342602 1361984 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 21:16:34.342623 1361984 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-249735 NodeName:embed-certs-249735 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 21:16:34.342762 1361984 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "embed-certs-249735"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 21:16:34.342833 1361984 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 21:16:34.352028 1361984 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 21:16:34.352101 1361984 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 21:16:34.361143 1361984 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I0819 21:16:34.379510 1361984 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 21:16:34.398600 1361984 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0819 21:16:34.417492 1361984 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0819 21:16:34.421028 1361984 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 21:16:34.431797 1361984 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 21:16:34.527480 1361984 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 21:16:34.543800 1361984 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/embed-certs-249735 for IP: 192.168.76.2
	I0819 21:16:34.543824 1361984 certs.go:194] generating shared ca certs ...
	I0819 21:16:34.543840 1361984 certs.go:226] acquiring lock for ca certs: {Name:mk862c79d80b8fe3a5df83b1592928b3403a862f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 21:16:34.543974 1361984 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-1139612/.minikube/ca.key
	I0819 21:16:34.544023 1361984 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-1139612/.minikube/proxy-client-ca.key
	I0819 21:16:34.544035 1361984 certs.go:256] generating profile certs ...
	I0819 21:16:34.544089 1361984 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/embed-certs-249735/client.key
	I0819 21:16:34.544118 1361984 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/embed-certs-249735/client.crt with IP's: []
	I0819 21:16:35.228563 1361984 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/embed-certs-249735/client.crt ...
	I0819 21:16:35.228598 1361984 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/embed-certs-249735/client.crt: {Name:mk3039998ddd64670b8d8c36ece867d672c92078 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 21:16:35.229385 1361984 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/embed-certs-249735/client.key ...
	I0819 21:16:35.229449 1361984 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/embed-certs-249735/client.key: {Name:mk7d6a4f9e86da33a92a3b7b846354f6e85f664d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 21:16:35.229588 1361984 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/embed-certs-249735/apiserver.key.70e1ab11
	I0819 21:16:35.229609 1361984 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/embed-certs-249735/apiserver.crt.70e1ab11 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I0819 21:16:35.569294 1361984 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/embed-certs-249735/apiserver.crt.70e1ab11 ...
	I0819 21:16:35.569330 1361984 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/embed-certs-249735/apiserver.crt.70e1ab11: {Name:mkf3dfd1b66057abbdcb1ef642727c27c969823c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 21:16:35.569957 1361984 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/embed-certs-249735/apiserver.key.70e1ab11 ...
	I0819 21:16:35.569979 1361984 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/embed-certs-249735/apiserver.key.70e1ab11: {Name:mk6d37692220f36176e8d97f324c59aa261cb941 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 21:16:35.570112 1361984 certs.go:381] copying /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/embed-certs-249735/apiserver.crt.70e1ab11 -> /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/embed-certs-249735/apiserver.crt
	I0819 21:16:35.570206 1361984 certs.go:385] copying /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/embed-certs-249735/apiserver.key.70e1ab11 -> /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/embed-certs-249735/apiserver.key
	I0819 21:16:35.570269 1361984 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/embed-certs-249735/proxy-client.key
	I0819 21:16:35.570288 1361984 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/embed-certs-249735/proxy-client.crt with IP's: []
	I0819 21:16:35.743315 1361984 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/embed-certs-249735/proxy-client.crt ...
	I0819 21:16:35.743349 1361984 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/embed-certs-249735/proxy-client.crt: {Name:mka563ffb64459326accfd49ffbebb47afa23d20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 21:16:35.743999 1361984 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/embed-certs-249735/proxy-client.key ...
	I0819 21:16:35.744022 1361984 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/embed-certs-249735/proxy-client.key: {Name:mk21bfe409f966bfc694d12b27b2463fb04e8b29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 21:16:35.744818 1361984 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-1139612/.minikube/certs/1145018.pem (1338 bytes)
	W0819 21:16:35.744898 1361984 certs.go:480] ignoring /home/jenkins/minikube-integration/19423-1139612/.minikube/certs/1145018_empty.pem, impossibly tiny 0 bytes
	I0819 21:16:35.744927 1361984 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-1139612/.minikube/certs/ca-key.pem (1679 bytes)
	I0819 21:16:35.744991 1361984 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-1139612/.minikube/certs/ca.pem (1078 bytes)
	I0819 21:16:35.745042 1361984 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-1139612/.minikube/certs/cert.pem (1123 bytes)
	I0819 21:16:35.745096 1361984 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-1139612/.minikube/certs/key.pem (1675 bytes)
	I0819 21:16:35.745205 1361984 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-1139612/.minikube/files/etc/ssl/certs/11450182.pem (1708 bytes)
	I0819 21:16:35.745968 1361984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1139612/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 21:16:35.774708 1361984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1139612/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 21:16:35.802896 1361984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1139612/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 21:16:35.830615 1361984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1139612/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0819 21:16:35.859453 1361984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/embed-certs-249735/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0819 21:16:35.885068 1361984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/embed-certs-249735/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0819 21:16:35.910238 1361984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/embed-certs-249735/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 21:16:35.935909 1361984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/embed-certs-249735/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0819 21:16:35.960468 1361984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1139612/.minikube/files/etc/ssl/certs/11450182.pem --> /usr/share/ca-certificates/11450182.pem (1708 bytes)
	I0819 21:16:35.987046 1361984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1139612/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 21:16:36.032647 1361984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1139612/.minikube/certs/1145018.pem --> /usr/share/ca-certificates/1145018.pem (1338 bytes)
	I0819 21:16:36.062154 1361984 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 21:16:36.084930 1361984 ssh_runner.go:195] Run: openssl version
	I0819 21:16:36.091304 1361984 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11450182.pem && ln -fs /usr/share/ca-certificates/11450182.pem /etc/ssl/certs/11450182.pem"
	I0819 21:16:36.101460 1361984 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11450182.pem
	I0819 21:16:36.105496 1361984 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 20:31 /usr/share/ca-certificates/11450182.pem
	I0819 21:16:36.105564 1361984 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11450182.pem
	I0819 21:16:36.113117 1361984 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11450182.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 21:16:36.122712 1361984 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 21:16:36.132093 1361984 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 21:16:36.135726 1361984 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 20:21 /usr/share/ca-certificates/minikubeCA.pem
	I0819 21:16:36.135798 1361984 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 21:16:36.143270 1361984 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 21:16:36.152684 1361984 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1145018.pem && ln -fs /usr/share/ca-certificates/1145018.pem /etc/ssl/certs/1145018.pem"
	I0819 21:16:36.161883 1361984 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1145018.pem
	I0819 21:16:36.165733 1361984 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 20:31 /usr/share/ca-certificates/1145018.pem
	I0819 21:16:36.165801 1361984 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1145018.pem
	I0819 21:16:36.172715 1361984 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1145018.pem /etc/ssl/certs/51391683.0"
	I0819 21:16:36.181903 1361984 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 21:16:36.185775 1361984 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0819 21:16:36.185874 1361984 kubeadm.go:392] StartCluster: {Name:embed-certs-249735 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-249735 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 21:16:36.185957 1361984 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0819 21:16:36.186011 1361984 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 21:16:36.230146 1361984 cri.go:89] found id: ""
	I0819 21:16:36.230289 1361984 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 21:16:36.239386 1361984 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 21:16:36.248681 1361984 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0819 21:16:36.248745 1361984 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 21:16:36.259827 1361984 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 21:16:36.259849 1361984 kubeadm.go:157] found existing configuration files:
	
	I0819 21:16:36.259908 1361984 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 21:16:36.268962 1361984 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 21:16:36.269033 1361984 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 21:16:36.277134 1361984 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 21:16:36.286066 1361984 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 21:16:36.286170 1361984 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 21:16:36.295013 1361984 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 21:16:36.304121 1361984 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 21:16:36.304209 1361984 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 21:16:36.313415 1361984 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 21:16:36.322583 1361984 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 21:16:36.322647 1361984 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
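The config check above greps each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and removes any file that does not reference it, so kubeadm can regenerate them on init. A minimal shell sketch of that cleanup loop, using the endpoint shown in the log:

    # Drop stale kubeconfig files that do not point at the expected API endpoint.
    endpoint="https://control-plane.minikube.internal:8443"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
    done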
	I0819 21:16:36.333751 1361984 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0819 21:16:36.378906 1361984 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0819 21:16:36.379117 1361984 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 21:16:36.398061 1361984 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0819 21:16:36.398134 1361984 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1067-aws
	I0819 21:16:36.398175 1361984 kubeadm.go:310] OS: Linux
	I0819 21:16:36.398224 1361984 kubeadm.go:310] CGROUPS_CPU: enabled
	I0819 21:16:36.398292 1361984 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0819 21:16:36.398341 1361984 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0819 21:16:36.398391 1361984 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0819 21:16:36.398446 1361984 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0819 21:16:36.398500 1361984 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0819 21:16:36.398547 1361984 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0819 21:16:36.398596 1361984 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0819 21:16:36.398646 1361984 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0819 21:16:36.457112 1361984 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 21:16:36.457222 1361984 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 21:16:36.457314 1361984 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0819 21:16:36.464754 1361984 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 21:16:36.469307 1361984 out.go:235]   - Generating certificates and keys ...
	I0819 21:16:36.469499 1361984 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 21:16:36.469623 1361984 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 21:16:37.295435 1361984 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0819 21:16:37.844864 1361984 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0819 21:16:38.101956 1361984 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0819 21:16:38.980698 1351451 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 21:16:39.000948 1351451 api_server.go:72] duration metric: took 5m58.287822906s to wait for apiserver process to appear ...
	I0819 21:16:39.000980 1351451 api_server.go:88] waiting for apiserver healthz status ...
	I0819 21:16:39.001038 1351451 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0819 21:16:39.001135 1351451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 21:16:39.096959 1351451 cri.go:89] found id: "863a642d161e02c4a9c3c27a6f9246c0a91afa231d64297aa4c31f2bf085af9e"
	I0819 21:16:39.096986 1351451 cri.go:89] found id: "ddd7fbd4fc4b1fc85931d27f22ce6544aefafe553e7ad058a444cb1240d045ce"
	I0819 21:16:39.096998 1351451 cri.go:89] found id: ""
	I0819 21:16:39.097007 1351451 logs.go:276] 2 containers: [863a642d161e02c4a9c3c27a6f9246c0a91afa231d64297aa4c31f2bf085af9e ddd7fbd4fc4b1fc85931d27f22ce6544aefafe553e7ad058a444cb1240d045ce]
	I0819 21:16:39.097082 1351451 ssh_runner.go:195] Run: which crictl
	I0819 21:16:39.103146 1351451 ssh_runner.go:195] Run: which crictl
	I0819 21:16:39.108342 1351451 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0819 21:16:39.108415 1351451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 21:16:39.181901 1351451 cri.go:89] found id: "fe816949c9d21133dfa290d66d4a672f9934aa3758c37c6b52633143ce54f267"
	I0819 21:16:39.181922 1351451 cri.go:89] found id: "de6ad75777dfe1d5d57ea2a85334e48c13ce3eefa7d1e083b8c7d11680d27ee5"
	I0819 21:16:39.181931 1351451 cri.go:89] found id: ""
	I0819 21:16:39.181942 1351451 logs.go:276] 2 containers: [fe816949c9d21133dfa290d66d4a672f9934aa3758c37c6b52633143ce54f267 de6ad75777dfe1d5d57ea2a85334e48c13ce3eefa7d1e083b8c7d11680d27ee5]
	I0819 21:16:39.182019 1351451 ssh_runner.go:195] Run: which crictl
	I0819 21:16:39.187650 1351451 ssh_runner.go:195] Run: which crictl
	I0819 21:16:39.195541 1351451 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0819 21:16:39.195721 1351451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 21:16:39.279675 1351451 cri.go:89] found id: "1c4204d908305e49b9bc8e0a713f3a60c3701a9edfadbf50bdaf6f12d055498e"
	I0819 21:16:39.279764 1351451 cri.go:89] found id: "f881846bec02adc45467aa1bcde798e06c205d2f213869c24d08f2277be24657"
	I0819 21:16:39.279788 1351451 cri.go:89] found id: ""
	I0819 21:16:39.279817 1351451 logs.go:276] 2 containers: [1c4204d908305e49b9bc8e0a713f3a60c3701a9edfadbf50bdaf6f12d055498e f881846bec02adc45467aa1bcde798e06c205d2f213869c24d08f2277be24657]
	I0819 21:16:39.280034 1351451 ssh_runner.go:195] Run: which crictl
	I0819 21:16:39.285431 1351451 ssh_runner.go:195] Run: which crictl
	I0819 21:16:39.290629 1351451 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0819 21:16:39.290859 1351451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 21:16:39.371561 1351451 cri.go:89] found id: "c4fa68e3e489b8ed0f15e0a99bcf04281f96ae3328ebc705a20531880e4d77b4"
	I0819 21:16:39.371650 1351451 cri.go:89] found id: "62242d9fb5d7108b3cbff9c68fdf6693fa30bbe1a57a81165d6adf95d2f9a15a"
	I0819 21:16:39.371676 1351451 cri.go:89] found id: ""
	I0819 21:16:39.371728 1351451 logs.go:276] 2 containers: [c4fa68e3e489b8ed0f15e0a99bcf04281f96ae3328ebc705a20531880e4d77b4 62242d9fb5d7108b3cbff9c68fdf6693fa30bbe1a57a81165d6adf95d2f9a15a]
	I0819 21:16:39.371851 1351451 ssh_runner.go:195] Run: which crictl
	I0819 21:16:39.376912 1351451 ssh_runner.go:195] Run: which crictl
	I0819 21:16:39.383459 1351451 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0819 21:16:39.383627 1351451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 21:16:39.448903 1351451 cri.go:89] found id: "3eaea7a4d564682eae94a2b48007416b88761d90abe2579a6ba16eff34bdba9a"
	I0819 21:16:39.448982 1351451 cri.go:89] found id: "63fbaf1d278cf5491ca7dd5ad4fdb07c3e8a90f6f4dfbcf5e073c12bf517db0e"
	I0819 21:16:39.449012 1351451 cri.go:89] found id: ""
	I0819 21:16:39.449034 1351451 logs.go:276] 2 containers: [3eaea7a4d564682eae94a2b48007416b88761d90abe2579a6ba16eff34bdba9a 63fbaf1d278cf5491ca7dd5ad4fdb07c3e8a90f6f4dfbcf5e073c12bf517db0e]
	I0819 21:16:39.449149 1351451 ssh_runner.go:195] Run: which crictl
	I0819 21:16:39.454509 1351451 ssh_runner.go:195] Run: which crictl
	I0819 21:16:39.459726 1351451 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 21:16:39.459909 1351451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 21:16:39.545859 1351451 cri.go:89] found id: "a93b5c5a814c88adf2a58e19336d5018f346f39f3866b52c57d7df973b978363"
	I0819 21:16:39.545936 1351451 cri.go:89] found id: "7708e0950e37b20add16fe5d8f9b34d5f3c928b13167f2ce6017087538b63525"
	I0819 21:16:39.545961 1351451 cri.go:89] found id: ""
	I0819 21:16:39.545981 1351451 logs.go:276] 2 containers: [a93b5c5a814c88adf2a58e19336d5018f346f39f3866b52c57d7df973b978363 7708e0950e37b20add16fe5d8f9b34d5f3c928b13167f2ce6017087538b63525]
	I0819 21:16:39.546083 1351451 ssh_runner.go:195] Run: which crictl
	I0819 21:16:39.561290 1351451 ssh_runner.go:195] Run: which crictl
	I0819 21:16:39.565853 1351451 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0819 21:16:39.566023 1351451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 21:16:39.630565 1351451 cri.go:89] found id: "2b97b1c764dc6a9ef65fd39fe731d38828a1720efb0d703155aa021f2e1cfa20"
	I0819 21:16:39.630648 1351451 cri.go:89] found id: "aa6e1bccf3b9e02915f7d77581c43d4973f5450dee4cbe799ab617b0d312978e"
	I0819 21:16:39.630667 1351451 cri.go:89] found id: ""
	I0819 21:16:39.630691 1351451 logs.go:276] 2 containers: [2b97b1c764dc6a9ef65fd39fe731d38828a1720efb0d703155aa021f2e1cfa20 aa6e1bccf3b9e02915f7d77581c43d4973f5450dee4cbe799ab617b0d312978e]
	I0819 21:16:39.630799 1351451 ssh_runner.go:195] Run: which crictl
	I0819 21:16:39.636088 1351451 ssh_runner.go:195] Run: which crictl
	I0819 21:16:39.640406 1351451 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 21:16:39.640539 1351451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 21:16:39.705002 1351451 cri.go:89] found id: "18b6c4b9755bc4f8f489bd6d79488572db886134c98fa78b6b1ebbbaf13e78a1"
	I0819 21:16:39.705082 1351451 cri.go:89] found id: ""
	I0819 21:16:39.705109 1351451 logs.go:276] 1 containers: [18b6c4b9755bc4f8f489bd6d79488572db886134c98fa78b6b1ebbbaf13e78a1]
	I0819 21:16:39.705209 1351451 ssh_runner.go:195] Run: which crictl
	I0819 21:16:39.713612 1351451 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0819 21:16:39.713752 1351451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0819 21:16:39.771883 1351451 cri.go:89] found id: "3b3340693ab7f6759c32d8665585f8d968c8d552cf6180284c4eaeed124e3d8e"
	I0819 21:16:39.771963 1351451 cri.go:89] found id: "556145ddf35210e0aed6253010261fc026d520ff7cf4f070c664092c877a33dd"
	I0819 21:16:39.771982 1351451 cri.go:89] found id: ""
	I0819 21:16:39.772007 1351451 logs.go:276] 2 containers: [3b3340693ab7f6759c32d8665585f8d968c8d552cf6180284c4eaeed124e3d8e 556145ddf35210e0aed6253010261fc026d520ff7cf4f070c664092c877a33dd]
	I0819 21:16:39.772133 1351451 ssh_runner.go:195] Run: which crictl
	I0819 21:16:39.777471 1351451 ssh_runner.go:195] Run: which crictl
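The healthz check above first enumerates container IDs component by component with crictl; the gathering steps that follow then pull the last 400 lines from each ID. A condensed sketch of that pattern (assuming crictl is on PATH on the node):

    # For each control-plane and addon component, list all containers
    # (running or exited) and dump the tail of their logs, mirroring the
    # crictl commands recorded above and below.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard storage-provisioner; do
      for id in $(sudo crictl ps -a --quiet --name="$name"); do
        sudo crictl logs --tail 400 "$id"
      done
    done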
	I0819 21:16:39.782219 1351451 logs.go:123] Gathering logs for dmesg ...
	I0819 21:16:39.782306 1351451 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 21:16:39.803506 1351451 logs.go:123] Gathering logs for kube-apiserver [ddd7fbd4fc4b1fc85931d27f22ce6544aefafe553e7ad058a444cb1240d045ce] ...
	I0819 21:16:39.803589 1351451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ddd7fbd4fc4b1fc85931d27f22ce6544aefafe553e7ad058a444cb1240d045ce"
	I0819 21:16:39.902538 1351451 logs.go:123] Gathering logs for kube-controller-manager [a93b5c5a814c88adf2a58e19336d5018f346f39f3866b52c57d7df973b978363] ...
	I0819 21:16:39.902629 1351451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a93b5c5a814c88adf2a58e19336d5018f346f39f3866b52c57d7df973b978363"
	I0819 21:16:39.988902 1351451 logs.go:123] Gathering logs for storage-provisioner [556145ddf35210e0aed6253010261fc026d520ff7cf4f070c664092c877a33dd] ...
	I0819 21:16:39.988936 1351451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 556145ddf35210e0aed6253010261fc026d520ff7cf4f070c664092c877a33dd"
	I0819 21:16:40.051419 1351451 logs.go:123] Gathering logs for container status ...
	I0819 21:16:40.051502 1351451 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 21:16:40.130468 1351451 logs.go:123] Gathering logs for describe nodes ...
	I0819 21:16:40.130553 1351451 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 21:16:40.440578 1351451 logs.go:123] Gathering logs for kube-apiserver [863a642d161e02c4a9c3c27a6f9246c0a91afa231d64297aa4c31f2bf085af9e] ...
	I0819 21:16:40.440661 1351451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 863a642d161e02c4a9c3c27a6f9246c0a91afa231d64297aa4c31f2bf085af9e"
	I0819 21:16:40.552968 1351451 logs.go:123] Gathering logs for etcd [de6ad75777dfe1d5d57ea2a85334e48c13ce3eefa7d1e083b8c7d11680d27ee5] ...
	I0819 21:16:40.553065 1351451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de6ad75777dfe1d5d57ea2a85334e48c13ce3eefa7d1e083b8c7d11680d27ee5"
	I0819 21:16:40.613652 1351451 logs.go:123] Gathering logs for coredns [1c4204d908305e49b9bc8e0a713f3a60c3701a9edfadbf50bdaf6f12d055498e] ...
	I0819 21:16:40.613737 1351451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c4204d908305e49b9bc8e0a713f3a60c3701a9edfadbf50bdaf6f12d055498e"
	I0819 21:16:40.704428 1351451 logs.go:123] Gathering logs for coredns [f881846bec02adc45467aa1bcde798e06c205d2f213869c24d08f2277be24657] ...
	I0819 21:16:40.704502 1351451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f881846bec02adc45467aa1bcde798e06c205d2f213869c24d08f2277be24657"
	I0819 21:16:40.762919 1351451 logs.go:123] Gathering logs for storage-provisioner [3b3340693ab7f6759c32d8665585f8d968c8d552cf6180284c4eaeed124e3d8e] ...
	I0819 21:16:40.762995 1351451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3b3340693ab7f6759c32d8665585f8d968c8d552cf6180284c4eaeed124e3d8e"
	I0819 21:16:40.818047 1351451 logs.go:123] Gathering logs for kubernetes-dashboard [18b6c4b9755bc4f8f489bd6d79488572db886134c98fa78b6b1ebbbaf13e78a1] ...
	I0819 21:16:40.818117 1351451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 18b6c4b9755bc4f8f489bd6d79488572db886134c98fa78b6b1ebbbaf13e78a1"
	I0819 21:16:40.872667 1351451 logs.go:123] Gathering logs for kube-scheduler [62242d9fb5d7108b3cbff9c68fdf6693fa30bbe1a57a81165d6adf95d2f9a15a] ...
	I0819 21:16:40.872739 1351451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 62242d9fb5d7108b3cbff9c68fdf6693fa30bbe1a57a81165d6adf95d2f9a15a"
	I0819 21:16:40.943174 1351451 logs.go:123] Gathering logs for kube-proxy [3eaea7a4d564682eae94a2b48007416b88761d90abe2579a6ba16eff34bdba9a] ...
	I0819 21:16:40.943243 1351451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3eaea7a4d564682eae94a2b48007416b88761d90abe2579a6ba16eff34bdba9a"
	I0819 21:16:41.028487 1351451 logs.go:123] Gathering logs for kube-proxy [63fbaf1d278cf5491ca7dd5ad4fdb07c3e8a90f6f4dfbcf5e073c12bf517db0e] ...
	I0819 21:16:41.028563 1351451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 63fbaf1d278cf5491ca7dd5ad4fdb07c3e8a90f6f4dfbcf5e073c12bf517db0e"
	I0819 21:16:41.099036 1351451 logs.go:123] Gathering logs for kube-controller-manager [7708e0950e37b20add16fe5d8f9b34d5f3c928b13167f2ce6017087538b63525] ...
	I0819 21:16:41.099118 1351451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7708e0950e37b20add16fe5d8f9b34d5f3c928b13167f2ce6017087538b63525"
	I0819 21:16:41.217328 1351451 logs.go:123] Gathering logs for kindnet [2b97b1c764dc6a9ef65fd39fe731d38828a1720efb0d703155aa021f2e1cfa20] ...
	I0819 21:16:41.217417 1351451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b97b1c764dc6a9ef65fd39fe731d38828a1720efb0d703155aa021f2e1cfa20"
	I0819 21:16:41.421061 1351451 logs.go:123] Gathering logs for kindnet [aa6e1bccf3b9e02915f7d77581c43d4973f5450dee4cbe799ab617b0d312978e] ...
	I0819 21:16:41.421169 1351451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aa6e1bccf3b9e02915f7d77581c43d4973f5450dee4cbe799ab617b0d312978e"
	I0819 21:16:41.500448 1351451 logs.go:123] Gathering logs for kubelet ...
	I0819 21:16:41.500533 1351451 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0819 21:16:41.586682 1351451 logs.go:138] Found kubelet problem: Aug 19 21:10:57 old-k8s-version-127648 kubelet[665]: E0819 21:10:57.621751     665 reflector.go:138] object-"kube-system"/"kindnet-token-xptkw": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-xptkw" is forbidden: User "system:node:old-k8s-version-127648" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-127648' and this object
	W0819 21:16:41.586985 1351451 logs.go:138] Found kubelet problem: Aug 19 21:10:57 old-k8s-version-127648 kubelet[665]: E0819 21:10:57.622026     665 reflector.go:138] object-"kube-system"/"coredns-token-jj86v": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-jj86v" is forbidden: User "system:node:old-k8s-version-127648" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-127648' and this object
	W0819 21:16:41.587223 1351451 logs.go:138] Found kubelet problem: Aug 19 21:10:57 old-k8s-version-127648 kubelet[665]: E0819 21:10:57.622171     665 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-127648" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-127648' and this object
	W0819 21:16:41.587487 1351451 logs.go:138] Found kubelet problem: Aug 19 21:10:57 old-k8s-version-127648 kubelet[665]: E0819 21:10:57.622316     665 reflector.go:138] object-"kube-system"/"storage-provisioner-token-ssctg": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-ssctg" is forbidden: User "system:node:old-k8s-version-127648" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-127648' and this object
	W0819 21:16:41.587731 1351451 logs.go:138] Found kubelet problem: Aug 19 21:10:57 old-k8s-version-127648 kubelet[665]: E0819 21:10:57.622453     665 reflector.go:138] object-"default"/"default-token-vbtl7": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-vbtl7" is forbidden: User "system:node:old-k8s-version-127648" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-127648' and this object
	W0819 21:16:41.587965 1351451 logs.go:138] Found kubelet problem: Aug 19 21:10:57 old-k8s-version-127648 kubelet[665]: E0819 21:10:57.671400     665 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-127648" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-127648' and this object
	W0819 21:16:41.588235 1351451 logs.go:138] Found kubelet problem: Aug 19 21:10:57 old-k8s-version-127648 kubelet[665]: E0819 21:10:57.671660     665 reflector.go:138] object-"kube-system"/"metrics-server-token-x764s": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-x764s" is forbidden: User "system:node:old-k8s-version-127648" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-127648' and this object
	W0819 21:16:41.597320 1351451 logs.go:138] Found kubelet problem: Aug 19 21:11:00 old-k8s-version-127648 kubelet[665]: E0819 21:11:00.574230     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0819 21:16:41.597559 1351451 logs.go:138] Found kubelet problem: Aug 19 21:11:00 old-k8s-version-127648 kubelet[665]: E0819 21:11:00.789676     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 21:16:41.600560 1351451 logs.go:138] Found kubelet problem: Aug 19 21:11:15 old-k8s-version-127648 kubelet[665]: E0819 21:11:15.561064     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0819 21:16:41.602849 1351451 logs.go:138] Found kubelet problem: Aug 19 21:11:22 old-k8s-version-127648 kubelet[665]: E0819 21:11:22.882357     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	W0819 21:16:41.603229 1351451 logs.go:138] Found kubelet problem: Aug 19 21:11:23 old-k8s-version-127648 kubelet[665]: E0819 21:11:23.880032     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	W0819 21:16:41.603468 1351451 logs.go:138] Found kubelet problem: Aug 19 21:11:27 old-k8s-version-127648 kubelet[665]: E0819 21:11:27.549335     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 21:16:41.604309 1351451 logs.go:138] Found kubelet problem: Aug 19 21:11:30 old-k8s-version-127648 kubelet[665]: E0819 21:11:30.903604     665 pod_workers.go:191] Error syncing pod 74e4d116-4e4e-4dc5-af07-3013282e840a ("storage-provisioner_kube-system(74e4d116-4e4e-4dc5-af07-3013282e840a)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(74e4d116-4e4e-4dc5-af07-3013282e840a)"
	W0819 21:16:41.604680 1351451 logs.go:138] Found kubelet problem: Aug 19 21:11:31 old-k8s-version-127648 kubelet[665]: E0819 21:11:31.736669     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	W0819 21:16:41.607638 1351451 logs.go:138] Found kubelet problem: Aug 19 21:11:41 old-k8s-version-127648 kubelet[665]: E0819 21:11:41.559000     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0819 21:16:41.608385 1351451 logs.go:138] Found kubelet problem: Aug 19 21:11:45 old-k8s-version-127648 kubelet[665]: E0819 21:11:45.947294     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	W0819 21:16:41.608989 1351451 logs.go:138] Found kubelet problem: Aug 19 21:11:51 old-k8s-version-127648 kubelet[665]: E0819 21:11:51.747256     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	W0819 21:16:41.609218 1351451 logs.go:138] Found kubelet problem: Aug 19 21:11:54 old-k8s-version-127648 kubelet[665]: E0819 21:11:54.549415     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 21:16:41.609955 1351451 logs.go:138] Found kubelet problem: Aug 19 21:12:06 old-k8s-version-127648 kubelet[665]: E0819 21:12:06.549632     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 21:16:41.610632 1351451 logs.go:138] Found kubelet problem: Aug 19 21:12:08 old-k8s-version-127648 kubelet[665]: E0819 21:12:08.101007     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	W0819 21:16:41.611019 1351451 logs.go:138] Found kubelet problem: Aug 19 21:12:11 old-k8s-version-127648 kubelet[665]: E0819 21:12:11.736495     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	W0819 21:16:41.611240 1351451 logs.go:138] Found kubelet problem: Aug 19 21:12:18 old-k8s-version-127648 kubelet[665]: E0819 21:12:18.550570     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 21:16:41.611614 1351451 logs.go:138] Found kubelet problem: Aug 19 21:12:22 old-k8s-version-127648 kubelet[665]: E0819 21:12:22.548984     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	W0819 21:16:41.614201 1351451 logs.go:138] Found kubelet problem: Aug 19 21:12:29 old-k8s-version-127648 kubelet[665]: E0819 21:12:29.557879     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0819 21:16:41.614589 1351451 logs.go:138] Found kubelet problem: Aug 19 21:12:35 old-k8s-version-127648 kubelet[665]: E0819 21:12:35.549004     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	W0819 21:16:41.614826 1351451 logs.go:138] Found kubelet problem: Aug 19 21:12:41 old-k8s-version-127648 kubelet[665]: E0819 21:12:41.549476     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 21:16:41.615210 1351451 logs.go:138] Found kubelet problem: Aug 19 21:12:47 old-k8s-version-127648 kubelet[665]: E0819 21:12:47.548954     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	W0819 21:16:41.615441 1351451 logs.go:138] Found kubelet problem: Aug 19 21:12:56 old-k8s-version-127648 kubelet[665]: E0819 21:12:56.550012     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 21:16:41.616091 1351451 logs.go:138] Found kubelet problem: Aug 19 21:12:59 old-k8s-version-127648 kubelet[665]: E0819 21:12:59.243984     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	W0819 21:16:41.616472 1351451 logs.go:138] Found kubelet problem: Aug 19 21:13:01 old-k8s-version-127648 kubelet[665]: E0819 21:13:01.736702     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	W0819 21:16:41.616702 1351451 logs.go:138] Found kubelet problem: Aug 19 21:13:08 old-k8s-version-127648 kubelet[665]: E0819 21:13:08.550071     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 21:16:41.618906 1351451 logs.go:138] Found kubelet problem: Aug 19 21:13:15 old-k8s-version-127648 kubelet[665]: E0819 21:13:15.549079     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	W0819 21:16:41.619148 1351451 logs.go:138] Found kubelet problem: Aug 19 21:13:21 old-k8s-version-127648 kubelet[665]: E0819 21:13:21.549597     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 21:16:41.619523 1351451 logs.go:138] Found kubelet problem: Aug 19 21:13:27 old-k8s-version-127648 kubelet[665]: E0819 21:13:27.549537     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	W0819 21:16:41.619742 1351451 logs.go:138] Found kubelet problem: Aug 19 21:13:36 old-k8s-version-127648 kubelet[665]: E0819 21:13:36.549526     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 21:16:41.620148 1351451 logs.go:138] Found kubelet problem: Aug 19 21:13:39 old-k8s-version-127648 kubelet[665]: E0819 21:13:39.549198     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	W0819 21:16:41.622878 1351451 logs.go:138] Found kubelet problem: Aug 19 21:13:50 old-k8s-version-127648 kubelet[665]: E0819 21:13:50.557660     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0819 21:16:41.623251 1351451 logs.go:138] Found kubelet problem: Aug 19 21:13:52 old-k8s-version-127648 kubelet[665]: E0819 21:13:52.549116     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	W0819 21:16:41.623473 1351451 logs.go:138] Found kubelet problem: Aug 19 21:14:01 old-k8s-version-127648 kubelet[665]: E0819 21:14:01.549487     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 21:16:41.623831 1351451 logs.go:138] Found kubelet problem: Aug 19 21:14:06 old-k8s-version-127648 kubelet[665]: E0819 21:14:06.548964     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	W0819 21:16:41.624043 1351451 logs.go:138] Found kubelet problem: Aug 19 21:14:16 old-k8s-version-127648 kubelet[665]: E0819 21:14:16.553951     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 21:16:41.624685 1351451 logs.go:138] Found kubelet problem: Aug 19 21:14:20 old-k8s-version-127648 kubelet[665]: E0819 21:14:20.469865     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	W0819 21:16:41.625055 1351451 logs.go:138] Found kubelet problem: Aug 19 21:14:21 old-k8s-version-127648 kubelet[665]: E0819 21:14:21.736453     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	W0819 21:16:41.625267 1351451 logs.go:138] Found kubelet problem: Aug 19 21:14:31 old-k8s-version-127648 kubelet[665]: E0819 21:14:31.549363     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 21:16:41.625624 1351451 logs.go:138] Found kubelet problem: Aug 19 21:14:36 old-k8s-version-127648 kubelet[665]: E0819 21:14:36.549968     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	W0819 21:16:41.625840 1351451 logs.go:138] Found kubelet problem: Aug 19 21:14:46 old-k8s-version-127648 kubelet[665]: E0819 21:14:46.549614     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 21:16:41.626210 1351451 logs.go:138] Found kubelet problem: Aug 19 21:14:48 old-k8s-version-127648 kubelet[665]: E0819 21:14:48.551815     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	W0819 21:16:41.626434 1351451 logs.go:138] Found kubelet problem: Aug 19 21:14:57 old-k8s-version-127648 kubelet[665]: E0819 21:14:57.549314     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 21:16:41.626812 1351451 logs.go:138] Found kubelet problem: Aug 19 21:15:03 old-k8s-version-127648 kubelet[665]: E0819 21:15:03.548948     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	W0819 21:16:41.627047 1351451 logs.go:138] Found kubelet problem: Aug 19 21:15:08 old-k8s-version-127648 kubelet[665]: E0819 21:15:08.550121     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 21:16:41.627407 1351451 logs.go:138] Found kubelet problem: Aug 19 21:15:18 old-k8s-version-127648 kubelet[665]: E0819 21:15:18.549558     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	W0819 21:16:41.627629 1351451 logs.go:138] Found kubelet problem: Aug 19 21:15:19 old-k8s-version-127648 kubelet[665]: E0819 21:15:19.549582     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 21:16:41.627992 1351451 logs.go:138] Found kubelet problem: Aug 19 21:15:31 old-k8s-version-127648 kubelet[665]: E0819 21:15:31.549167     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	W0819 21:16:41.628208 1351451 logs.go:138] Found kubelet problem: Aug 19 21:15:34 old-k8s-version-127648 kubelet[665]: E0819 21:15:34.549642     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 21:16:41.628441 1351451 logs.go:138] Found kubelet problem: Aug 19 21:15:46 old-k8s-version-127648 kubelet[665]: E0819 21:15:46.549479     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 21:16:41.628833 1351451 logs.go:138] Found kubelet problem: Aug 19 21:15:46 old-k8s-version-127648 kubelet[665]: E0819 21:15:46.551366     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	W0819 21:16:41.632353 1351451 logs.go:138] Found kubelet problem: Aug 19 21:15:59 old-k8s-version-127648 kubelet[665]: E0819 21:15:59.549085     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	W0819 21:16:41.632616 1351451 logs.go:138] Found kubelet problem: Aug 19 21:16:00 old-k8s-version-127648 kubelet[665]: E0819 21:16:00.554355     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 21:16:41.632858 1351451 logs.go:138] Found kubelet problem: Aug 19 21:16:14 old-k8s-version-127648 kubelet[665]: E0819 21:16:14.549495     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 21:16:41.633216 1351451 logs.go:138] Found kubelet problem: Aug 19 21:16:14 old-k8s-version-127648 kubelet[665]: E0819 21:16:14.550809     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	W0819 21:16:41.633561 1351451 logs.go:138] Found kubelet problem: Aug 19 21:16:28 old-k8s-version-127648 kubelet[665]: E0819 21:16:28.551255     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 21:16:41.633784 1351451 logs.go:138] Found kubelet problem: Aug 19 21:16:28 old-k8s-version-127648 kubelet[665]: E0819 21:16:28.549823     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	W0819 21:16:41.636275 1351451 logs.go:138] Found kubelet problem: Aug 19 21:16:39 old-k8s-version-127648 kubelet[665]: E0819 21:16:39.556947     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	I0819 21:16:41.636305 1351451 logs.go:123] Gathering logs for etcd [fe816949c9d21133dfa290d66d4a672f9934aa3758c37c6b52633143ce54f267] ...
	I0819 21:16:41.636337 1351451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fe816949c9d21133dfa290d66d4a672f9934aa3758c37c6b52633143ce54f267"
	I0819 21:16:41.736897 1351451 logs.go:123] Gathering logs for kube-scheduler [c4fa68e3e489b8ed0f15e0a99bcf04281f96ae3328ebc705a20531880e4d77b4] ...
	I0819 21:16:41.736971 1351451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4fa68e3e489b8ed0f15e0a99bcf04281f96ae3328ebc705a20531880e4d77b4"
	I0819 21:16:39.003265 1361984 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0819 21:16:39.805980 1361984 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0819 21:16:39.806294 1361984 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [embed-certs-249735 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0819 21:16:40.260154 1361984 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0819 21:16:40.260471 1361984 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-249735 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0819 21:16:41.300185 1361984 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0819 21:16:42.039736 1361984 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0819 21:16:42.510620 1361984 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0819 21:16:42.510701 1361984 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 21:16:43.779577 1361984 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 21:16:44.370712 1361984 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0819 21:16:44.612011 1361984 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 21:16:45.387392 1361984 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 21:16:45.648944 1361984 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 21:16:45.649601 1361984 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 21:16:45.652620 1361984 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 21:16:41.824699 1351451 logs.go:123] Gathering logs for containerd ...
	I0819 21:16:41.824726 1351451 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0819 21:16:41.905377 1351451 out.go:358] Setting ErrFile to fd 2...
	I0819 21:16:41.905452 1351451 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0819 21:16:41.905554 1351451 out.go:270] X Problems detected in kubelet:
	W0819 21:16:41.905600 1351451 out.go:270]   Aug 19 21:16:14 old-k8s-version-127648 kubelet[665]: E0819 21:16:14.549495     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 21:16:41.905771 1351451 out.go:270]   Aug 19 21:16:14 old-k8s-version-127648 kubelet[665]: E0819 21:16:14.550809     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	W0819 21:16:41.905806 1351451 out.go:270]   Aug 19 21:16:28 old-k8s-version-127648 kubelet[665]: E0819 21:16:28.551255     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 21:16:41.905859 1351451 out.go:270]   Aug 19 21:16:28 old-k8s-version-127648 kubelet[665]: E0819 21:16:28.549823     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	W0819 21:16:41.905906 1351451 out.go:270]   Aug 19 21:16:39 old-k8s-version-127648 kubelet[665]: E0819 21:16:39.556947     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	I0819 21:16:41.905968 1351451 out.go:358] Setting ErrFile to fd 2...
	I0819 21:16:41.905993 1351451 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 21:16:45.654802 1361984 out.go:235]   - Booting up control plane ...
	I0819 21:16:45.654895 1361984 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 21:16:45.654976 1361984 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 21:16:45.655045 1361984 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 21:16:45.668882 1361984 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 21:16:45.676020 1361984 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 21:16:45.676448 1361984 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 21:16:45.783003 1361984 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0819 21:16:45.783121 1361984 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0819 21:16:47.785318 1361984 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 2.002328001s
	I0819 21:16:47.785419 1361984 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0819 21:16:51.906826 1351451 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0819 21:16:51.922052 1351451 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I0819 21:16:51.923612 1351451 out.go:201] 
	W0819 21:16:51.924862 1351451 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W0819 21:16:51.925047 1351451 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	W0819 21:16:51.925129 1351451 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	W0819 21:16:51.925214 1351451 out.go:270] * 
	W0819 21:16:51.926428 1351451 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 21:16:51.927283 1351451 out.go:201] 
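	The suggestion block above names two commands; as a sketch of the suggested recovery for this profile (the profile name old-k8s-version-127648 and both commands are taken from the log output above; the -p flag is added here only to target that profile explicitly):
	
	  # capture full logs for the GitHub issue before removing anything
	  minikube logs --file=logs.txt -p old-k8s-version-127648
	  # remove all profiles and cached state, as suggested for K8S_UNHEALTHY_CONTROL_PLANE
	  minikube delete --all --purge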
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	90ffd31b3114d       523cad1a4df73       2 minutes ago       Exited              dashboard-metrics-scraper   5                   f9e7ec35c46b5       dashboard-metrics-scraper-8d5bb5db8-lrvtv
	3b3340693ab7f       ba04bb24b9575       5 minutes ago       Running             storage-provisioner         2                   a7fea58283917       storage-provisioner
	18b6c4b9755bc       20b332c9a70d8       5 minutes ago       Running             kubernetes-dashboard        0                   63d9a43425611       kubernetes-dashboard-cd95d586-45zvk
	3eaea7a4d5646       25a5233254979       5 minutes ago       Running             kube-proxy                  1                   3c703ae455332       kube-proxy-l9jdt
	1d52bf16b4b31       1611cd07b61d5       5 minutes ago       Running             busybox                     1                   55a0b3195bf4c       busybox
	556145ddf3521       ba04bb24b9575       5 minutes ago       Exited              storage-provisioner         1                   a7fea58283917       storage-provisioner
	1c4204d908305       db91994f4ee8f       5 minutes ago       Running             coredns                     1                   ef491a1d3b3fe       coredns-74ff55c5b-fj4wf
	2b97b1c764dc6       6a23fa8fd2b78       5 minutes ago       Running             kindnet-cni                 1                   b0e5da863d938       kindnet-bmcmx
	c4fa68e3e489b       e7605f88f17d6       6 minutes ago       Running             kube-scheduler              1                   00655c236d420       kube-scheduler-old-k8s-version-127648
	fe816949c9d21       05b738aa1bc63       6 minutes ago       Running             etcd                        1                   0d61b315278df       etcd-old-k8s-version-127648
	863a642d161e0       2c08bbbc02d3a       6 minutes ago       Running             kube-apiserver              1                   6d1c6c7613e2a       kube-apiserver-old-k8s-version-127648
	a93b5c5a814c8       1df8a2b116bd1       6 minutes ago       Running             kube-controller-manager     1                   8abb8077c6ed2       kube-controller-manager-old-k8s-version-127648
	cd480a382e3b5       1611cd07b61d5       6 minutes ago       Exited              busybox                     0                   f2c6f94cae4b1       busybox
	f881846bec02a       db91994f4ee8f       8 minutes ago       Exited              coredns                     0                   2af76c44ae9bc       coredns-74ff55c5b-fj4wf
	aa6e1bccf3b9e       6a23fa8fd2b78       8 minutes ago       Exited              kindnet-cni                 0                   6df4aa804107e       kindnet-bmcmx
	63fbaf1d278cf       25a5233254979       8 minutes ago       Exited              kube-proxy                  0                   62f7f6ea5d35c       kube-proxy-l9jdt
	de6ad75777dfe       05b738aa1bc63       9 minutes ago       Exited              etcd                        0                   818fe65da9741       etcd-old-k8s-version-127648
	ddd7fbd4fc4b1       2c08bbbc02d3a       9 minutes ago       Exited              kube-apiserver              0                   0b5486c0d7170       kube-apiserver-old-k8s-version-127648
	62242d9fb5d71       e7605f88f17d6       9 minutes ago       Exited              kube-scheduler              0                   d43ca1ea08068       kube-scheduler-old-k8s-version-127648
	7708e0950e37b       1df8a2b116bd1       9 minutes ago       Exited              kube-controller-manager     0                   b11647aff870d       kube-controller-manager-old-k8s-version-127648
	
	
	==> containerd <==
	Aug 19 21:12:58 old-k8s-version-127648 containerd[567]: time="2024-08-19T21:12:58.576368174Z" level=info msg="CreateContainer within sandbox \"f9e7ec35c46b59d7bfb95d83e73bc014ac3da08d8d937d44497be6926a5b71f4\" for name:\"dashboard-metrics-scraper\" attempt:4 returns container id \"98808d57df0038ffeaa125bfff2b878ccd2ab753519c4e3d7dc0fdfb695cce72\""
	Aug 19 21:12:58 old-k8s-version-127648 containerd[567]: time="2024-08-19T21:12:58.577098347Z" level=info msg="StartContainer for \"98808d57df0038ffeaa125bfff2b878ccd2ab753519c4e3d7dc0fdfb695cce72\""
	Aug 19 21:12:58 old-k8s-version-127648 containerd[567]: time="2024-08-19T21:12:58.642349661Z" level=info msg="StartContainer for \"98808d57df0038ffeaa125bfff2b878ccd2ab753519c4e3d7dc0fdfb695cce72\" returns successfully"
	Aug 19 21:12:58 old-k8s-version-127648 containerd[567]: time="2024-08-19T21:12:58.678189633Z" level=info msg="shim disconnected" id=98808d57df0038ffeaa125bfff2b878ccd2ab753519c4e3d7dc0fdfb695cce72 namespace=k8s.io
	Aug 19 21:12:58 old-k8s-version-127648 containerd[567]: time="2024-08-19T21:12:58.678260015Z" level=warning msg="cleaning up after shim disconnected" id=98808d57df0038ffeaa125bfff2b878ccd2ab753519c4e3d7dc0fdfb695cce72 namespace=k8s.io
	Aug 19 21:12:58 old-k8s-version-127648 containerd[567]: time="2024-08-19T21:12:58.678269647Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Aug 19 21:12:59 old-k8s-version-127648 containerd[567]: time="2024-08-19T21:12:59.264012258Z" level=info msg="RemoveContainer for \"9c9f120d3b8c5c3120bc430c1904f7faf3099f6d257b5ad287df21e88ebb8e1d\""
	Aug 19 21:12:59 old-k8s-version-127648 containerd[567]: time="2024-08-19T21:12:59.270166965Z" level=info msg="RemoveContainer for \"9c9f120d3b8c5c3120bc430c1904f7faf3099f6d257b5ad287df21e88ebb8e1d\" returns successfully"
	Aug 19 21:13:50 old-k8s-version-127648 containerd[567]: time="2024-08-19T21:13:50.550335788Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 19 21:13:50 old-k8s-version-127648 containerd[567]: time="2024-08-19T21:13:50.555324295Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host" host=fake.domain
	Aug 19 21:13:50 old-k8s-version-127648 containerd[567]: time="2024-08-19T21:13:50.557219199Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	Aug 19 21:13:50 old-k8s-version-127648 containerd[567]: time="2024-08-19T21:13:50.557311349Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Aug 19 21:14:19 old-k8s-version-127648 containerd[567]: time="2024-08-19T21:14:19.550942549Z" level=info msg="CreateContainer within sandbox \"f9e7ec35c46b59d7bfb95d83e73bc014ac3da08d8d937d44497be6926a5b71f4\" for container name:\"dashboard-metrics-scraper\" attempt:5"
	Aug 19 21:14:19 old-k8s-version-127648 containerd[567]: time="2024-08-19T21:14:19.570508682Z" level=info msg="CreateContainer within sandbox \"f9e7ec35c46b59d7bfb95d83e73bc014ac3da08d8d937d44497be6926a5b71f4\" for name:\"dashboard-metrics-scraper\" attempt:5 returns container id \"90ffd31b3114d75eda8948c5e9153728e6ff7db21822b6f58bafd791e62e15bc\""
	Aug 19 21:14:19 old-k8s-version-127648 containerd[567]: time="2024-08-19T21:14:19.571042133Z" level=info msg="StartContainer for \"90ffd31b3114d75eda8948c5e9153728e6ff7db21822b6f58bafd791e62e15bc\""
	Aug 19 21:14:19 old-k8s-version-127648 containerd[567]: time="2024-08-19T21:14:19.653774379Z" level=info msg="StartContainer for \"90ffd31b3114d75eda8948c5e9153728e6ff7db21822b6f58bafd791e62e15bc\" returns successfully"
	Aug 19 21:14:19 old-k8s-version-127648 containerd[567]: time="2024-08-19T21:14:19.687092401Z" level=info msg="shim disconnected" id=90ffd31b3114d75eda8948c5e9153728e6ff7db21822b6f58bafd791e62e15bc namespace=k8s.io
	Aug 19 21:14:19 old-k8s-version-127648 containerd[567]: time="2024-08-19T21:14:19.687155677Z" level=warning msg="cleaning up after shim disconnected" id=90ffd31b3114d75eda8948c5e9153728e6ff7db21822b6f58bafd791e62e15bc namespace=k8s.io
	Aug 19 21:14:19 old-k8s-version-127648 containerd[567]: time="2024-08-19T21:14:19.687167156Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Aug 19 21:14:20 old-k8s-version-127648 containerd[567]: time="2024-08-19T21:14:20.475867088Z" level=info msg="RemoveContainer for \"98808d57df0038ffeaa125bfff2b878ccd2ab753519c4e3d7dc0fdfb695cce72\""
	Aug 19 21:14:20 old-k8s-version-127648 containerd[567]: time="2024-08-19T21:14:20.492350817Z" level=info msg="RemoveContainer for \"98808d57df0038ffeaa125bfff2b878ccd2ab753519c4e3d7dc0fdfb695cce72\" returns successfully"
	Aug 19 21:16:39 old-k8s-version-127648 containerd[567]: time="2024-08-19T21:16:39.550330462Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 19 21:16:39 old-k8s-version-127648 containerd[567]: time="2024-08-19T21:16:39.555270100Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host" host=fake.domain
	Aug 19 21:16:39 old-k8s-version-127648 containerd[567]: time="2024-08-19T21:16:39.556303782Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	Aug 19 21:16:39 old-k8s-version-127648 containerd[567]: time="2024-08-19T21:16:39.556425510Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	
	
	==> coredns [1c4204d908305e49b9bc8e0a713f3a60c3701a9edfadbf50bdaf6f12d055498e] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = 093a0bf1423dd8c4eee62372bb216168
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:57858 - 31612 "HINFO IN 5207757024897347302.6221884481298753466. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.019630763s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	I0819 21:11:29.966002       1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-08-19 21:10:59.965352446 +0000 UTC m=+0.027452277) (total time: 30.00054843s):
	Trace[2019727887]: [30.00054843s] [30.00054843s] END
	E0819 21:11:29.966040       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0819 21:11:29.966365       1 trace.go:116] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-08-19 21:10:59.965925962 +0000 UTC m=+0.028025802) (total time: 30.000416109s):
	Trace[939984059]: [30.000416109s] [30.000416109s] END
	E0819 21:11:29.966427       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0819 21:11:29.966534       1 trace.go:116] Trace[911902081]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-08-19 21:10:59.966191211 +0000 UTC m=+0.028291051) (total time: 30.000331705s):
	Trace[911902081]: [30.000331705s] [30.000331705s] END
	E0819 21:11:29.966618       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> coredns [f881846bec02adc45467aa1bcde798e06c205d2f213869c24d08f2277be24657] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = 093a0bf1423dd8c4eee62372bb216168
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:35444 - 40715 "HINFO IN 4304486271716434029.2850386008944031993. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.046241969s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-127648
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-127648
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7253360125032c7e2214e25ff4b5c894ae5844e8
	                    minikube.k8s.io/name=old-k8s-version-127648
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_19T21_08_00_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 21:07:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-127648
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 21:16:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 21:11:58 +0000   Mon, 19 Aug 2024 21:07:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 21:11:58 +0000   Mon, 19 Aug 2024 21:07:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 21:11:58 +0000   Mon, 19 Aug 2024 21:07:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 21:11:58 +0000   Mon, 19 Aug 2024 21:08:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-127648
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022360Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022360Ki
	  pods:               110
	System Info:
	  Machine ID:                 8a06a41ecef14bd78834cf9d4770a35f
	  System UUID:                cd5a7492-44e3-4e58-8a1a-93535ca2aa9b
	  Boot ID:                    b7846bbc-2ca5-4e44-8ea6-94e5c03d25fd
	  Kernel Version:             5.15.0-1067-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.20
	  Kubelet Version:            v1.20.0
	  Kube-Proxy Version:         v1.20.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m46s
	  kube-system                 coredns-74ff55c5b-fj4wf                           100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     8m39s
	  kube-system                 etcd-old-k8s-version-127648                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         8m46s
	  kube-system                 kindnet-bmcmx                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      8m39s
	  kube-system                 kube-apiserver-old-k8s-version-127648             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m46s
	  kube-system                 kube-controller-manager-old-k8s-version-127648    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m46s
	  kube-system                 kube-proxy-l9jdt                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m39s
	  kube-system                 kube-scheduler-old-k8s-version-127648             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m46s
	  kube-system                 metrics-server-9975d5f86-4glzw                    100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         6m36s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m37s
	  kubernetes-dashboard        dashboard-metrics-scraper-8d5bb5db8-lrvtv         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m39s
	  kubernetes-dashboard        kubernetes-dashboard-cd95d586-45zvk               0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m39s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             420Mi (5%)  220Mi (2%)
	  ephemeral-storage  100Mi (0%)  0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From        Message
	  ----    ------                   ----                 ----        -------
	  Normal  Starting                 9m6s                 kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m6s (x4 over 9m6s)  kubelet     Node old-k8s-version-127648 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m6s (x3 over 9m6s)  kubelet     Node old-k8s-version-127648 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m6s (x3 over 9m6s)  kubelet     Node old-k8s-version-127648 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m6s                 kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 8m46s                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m46s                kubelet     Node old-k8s-version-127648 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m46s                kubelet     Node old-k8s-version-127648 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m46s                kubelet     Node old-k8s-version-127648 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m46s                kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                8m39s                kubelet     Node old-k8s-version-127648 status is now: NodeReady
	  Normal  Starting                 8m37s                kube-proxy  Starting kube-proxy.
	  Normal  Starting                 6m6s                 kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m6s (x7 over 6m6s)  kubelet     Node old-k8s-version-127648 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m6s (x8 over 6m6s)  kubelet     Node old-k8s-version-127648 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m6s (x8 over 6m6s)  kubelet     Node old-k8s-version-127648 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m6s                 kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 5m53s                kube-proxy  Starting kube-proxy.
	
	
	==> dmesg <==
	
	
	==> etcd [de6ad75777dfe1d5d57ea2a85334e48c13ce3eefa7d1e083b8c7d11680d27ee5] <==
	raft2024/08/19 21:07:50 INFO: 9f0758e1c58a86ed became candidate at term 2
	raft2024/08/19 21:07:50 INFO: 9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2
	raft2024/08/19 21:07:50 INFO: 9f0758e1c58a86ed became leader at term 2
	raft2024/08/19 21:07:50 INFO: raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2
	2024-08-19 21:07:50.560309 I | etcdserver: setting up the initial cluster version to 3.4
	2024-08-19 21:07:50.562948 N | etcdserver/membership: set the initial cluster version to 3.4
	2024-08-19 21:07:50.563122 I | etcdserver/api: enabled capabilities for version 3.4
	2024-08-19 21:07:50.563212 I | etcdserver: published {Name:old-k8s-version-127648 ClientURLs:[https://192.168.85.2:2379]} to cluster 68eaea490fab4e05
	2024-08-19 21:07:50.563292 I | embed: ready to serve client requests
	2024-08-19 21:07:50.566952 I | embed: serving client requests on 127.0.0.1:2379
	2024-08-19 21:07:50.567181 I | embed: ready to serve client requests
	2024-08-19 21:07:50.579663 I | embed: serving client requests on 192.168.85.2:2379
	2024-08-19 21:08:14.784950 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-19 21:08:23.479691 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-19 21:08:33.479699 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-19 21:08:43.479874 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-19 21:08:53.479690 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-19 21:09:03.479948 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-19 21:09:13.479716 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-19 21:09:23.479750 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-19 21:09:33.479570 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-19 21:09:43.479749 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-19 21:09:53.479893 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-19 21:10:03.479694 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-19 21:10:13.479685 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> etcd [fe816949c9d21133dfa290d66d4a672f9934aa3758c37c6b52633143ce54f267] <==
	2024-08-19 21:12:44.993663 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-19 21:12:54.993582 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-19 21:13:04.993517 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-19 21:13:14.993569 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-19 21:13:24.993550 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-19 21:13:34.993455 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-19 21:13:44.993713 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-19 21:13:54.993526 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-19 21:14:04.993742 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-19 21:14:14.993527 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-19 21:14:24.993586 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-19 21:14:34.993550 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-19 21:14:44.993992 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-19 21:14:54.993504 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-19 21:15:04.993549 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-19 21:15:14.993538 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-19 21:15:24.993813 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-19 21:15:34.993557 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-19 21:15:44.993650 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-19 21:15:54.993671 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-19 21:16:04.993600 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-19 21:16:14.993932 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-19 21:16:24.993727 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-19 21:16:34.993729 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-19 21:16:44.997223 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> kernel <==
	 21:16:54 up  4:59,  0 users,  load average: 1.75, 1.71, 2.31
	Linux old-k8s-version-127648 5.15.0-1067-aws #73~20.04.1-Ubuntu SMP Wed Jul 24 17:31:05 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [2b97b1c764dc6a9ef65fd39fe731d38828a1720efb0d703155aa021f2e1cfa20] <==
	I0819 21:15:30.400212       1 main.go:299] handling current node
	I0819 21:15:40.399149       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0819 21:15:40.399187       1 main.go:299] handling current node
	W0819 21:15:45.601116       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0819 21:15:45.601178       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	I0819 21:15:50.400010       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0819 21:15:50.400044       1 main.go:299] handling current node
	I0819 21:16:00.399086       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0819 21:16:00.399126       1 main.go:299] handling current node
	I0819 21:16:10.399243       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0819 21:16:10.399279       1 main.go:299] handling current node
	W0819 21:16:12.794344       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0819 21:16:12.794394       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	I0819 21:16:20.399644       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0819 21:16:20.399699       1 main.go:299] handling current node
	W0819 21:16:25.229635       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0819 21:16:25.229858       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	I0819 21:16:30.399966       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0819 21:16:30.400072       1 main.go:299] handling current node
	W0819 21:16:34.848289       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0819 21:16:34.848332       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	I0819 21:16:40.402808       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0819 21:16:40.403030       1 main.go:299] handling current node
	I0819 21:16:50.399263       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0819 21:16:50.399306       1 main.go:299] handling current node
	
	
	==> kindnet [aa6e1bccf3b9e02915f7d77581c43d4973f5450dee4cbe799ab617b0d312978e] <==
	E0819 21:09:01.150139       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	I0819 21:09:08.918239       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0819 21:09:08.918279       1 main.go:299] handling current node
	I0819 21:09:18.917625       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0819 21:09:18.917668       1 main.go:299] handling current node
	W0819 21:09:21.817271       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0819 21:09:21.817307       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	W0819 21:09:27.941506       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0819 21:09:27.941543       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	I0819 21:09:28.918073       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0819 21:09:28.918147       1 main.go:299] handling current node
	I0819 21:09:38.917616       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0819 21:09:38.917658       1 main.go:299] handling current node
	W0819 21:09:48.290292       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0819 21:09:48.290350       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	I0819 21:09:48.918124       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0819 21:09:48.918172       1 main.go:299] handling current node
	I0819 21:09:58.917787       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0819 21:09:58.917821       1 main.go:299] handling current node
	W0819 21:10:01.006124       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0819 21:10:01.006169       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	I0819 21:10:08.917471       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0819 21:10:08.917509       1 main.go:299] handling current node
	W0819 21:10:10.515149       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0819 21:10:10.515183       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	
	
	==> kube-apiserver [863a642d161e02c4a9c3c27a6f9246c0a91afa231d64297aa4c31f2bf085af9e] <==
	I0819 21:13:35.245547       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0819 21:13:35.245578       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0819 21:14:00.445033       1 handler_proxy.go:102] no RequestInfo found in the context
	E0819 21:14:00.445309       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0819 21:14:00.445325       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0819 21:14:19.624367       1 client.go:360] parsed scheme: "passthrough"
	I0819 21:14:19.624409       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0819 21:14:19.624433       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0819 21:14:53.796882       1 client.go:360] parsed scheme: "passthrough"
	I0819 21:14:53.796929       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0819 21:14:53.796938       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0819 21:15:27.257877       1 client.go:360] parsed scheme: "passthrough"
	I0819 21:15:27.257921       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0819 21:15:27.257931       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0819 21:15:58.650104       1 handler_proxy.go:102] no RequestInfo found in the context
	E0819 21:15:58.650423       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0819 21:15:58.650443       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0819 21:16:02.875576       1 client.go:360] parsed scheme: "passthrough"
	I0819 21:16:02.875640       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0819 21:16:02.875650       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0819 21:16:41.242299       1 client.go:360] parsed scheme: "passthrough"
	I0819 21:16:41.242355       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0819 21:16:41.242365       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	
	==> kube-apiserver [ddd7fbd4fc4b1fc85931d27f22ce6544aefafe553e7ad058a444cb1240d045ce] <==
	I0819 21:07:57.652777       1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
	I0819 21:07:57.653544       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0819 21:07:58.126088       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0819 21:07:58.171557       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0819 21:07:58.269714       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I0819 21:07:58.270694       1 controller.go:606] quota admission added evaluator for: endpoints
	I0819 21:07:58.275353       1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0819 21:07:59.313026       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	I0819 21:07:59.951199       1 controller.go:606] quota admission added evaluator for: deployments.apps
	I0819 21:08:00.173007       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	I0819 21:08:08.596742       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	I0819 21:08:15.346859       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
	I0819 21:08:15.362639       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	I0819 21:08:21.396840       1 client.go:360] parsed scheme: "passthrough"
	I0819 21:08:21.396883       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0819 21:08:21.396891       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0819 21:09:04.689686       1 client.go:360] parsed scheme: "passthrough"
	I0819 21:09:04.689726       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0819 21:09:04.689775       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0819 21:09:39.845014       1 client.go:360] parsed scheme: "passthrough"
	I0819 21:09:39.845062       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0819 21:09:39.845071       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0819 21:10:14.353885       1 client.go:360] parsed scheme: "passthrough"
	I0819 21:10:14.353950       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0819 21:10:14.353960       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	
	==> kube-controller-manager [7708e0950e37b20add16fe5d8f9b34d5f3c928b13167f2ce6017087538b63525] <==
	I0819 21:08:15.332668       1 taint_manager.go:187] Starting NoExecuteTaintManager
	I0819 21:08:15.332941       1 event.go:291] "Event occurred" object="old-k8s-version-127648" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node old-k8s-version-127648 event: Registered Node old-k8s-version-127648 in Controller"
	I0819 21:08:15.355063       1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring 
	I0819 21:08:15.414073       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-l9jdt"
	I0819 21:08:15.438282       1 shared_informer.go:247] Caches are synced for endpoint_slice 
	I0819 21:08:15.441232       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-bmcmx"
	I0819 21:08:15.439391       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-74ff55c5b to 2"
	I0819 21:08:15.471004       1 shared_informer.go:247] Caches are synced for resource quota 
	I0819 21:08:15.480463       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-24vqz"
	I0819 21:08:15.497907       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-fj4wf"
	I0819 21:08:15.505849       1 shared_informer.go:247] Caches are synced for attach detach 
	I0819 21:08:15.509714       1 shared_informer.go:247] Caches are synced for resource quota 
	I0819 21:08:15.644863       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I0819 21:08:15.945082       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0819 21:08:15.954290       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0819 21:08:15.954328       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0819 21:08:16.680448       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
	I0819 21:08:16.711889       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-24vqz"
	I0819 21:08:20.332640       1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I0819 21:10:17.736776       1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-9975d5f86 to 1"
	I0819 21:10:17.824803       1 event.go:291] "Event occurred" object="kube-system/metrics-server-9975d5f86" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-9975d5f86-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
	E0819 21:10:17.849379       1 replica_set.go:532] sync "kube-system/metrics-server-9975d5f86" failed with pods "metrics-server-9975d5f86-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
	E0819 21:10:18.030693       1 clusterroleaggregation_controller.go:181] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
	E0819 21:10:18.051546       1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
	I0819 21:10:18.882772       1 event.go:291] "Event occurred" object="kube-system/metrics-server-9975d5f86" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-9975d5f86-4glzw"
	
	
	==> kube-controller-manager [a93b5c5a814c88adf2a58e19336d5018f346f39f3866b52c57d7df973b978363] <==
	E0819 21:12:47.085409       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0819 21:12:55.052911       1 request.go:655] Throttling request took 1.043835654s, request: GET:https://192.168.85.2:8443/apis/coordination.k8s.io/v1?timeout=32s
	W0819 21:12:55.904561       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0819 21:13:17.590688       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0819 21:13:27.555145       1 request.go:655] Throttling request took 1.048227965s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
	W0819 21:13:28.406616       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0819 21:13:48.092653       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0819 21:14:00.057261       1 request.go:655] Throttling request took 1.048423718s, request: GET:https://192.168.85.2:8443/apis/policy/v1beta1?timeout=32s
	W0819 21:14:00.908378       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0819 21:14:18.594429       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0819 21:14:32.558917       1 request.go:655] Throttling request took 1.048281502s, request: GET:https://192.168.85.2:8443/apis/scheduling.k8s.io/v1?timeout=32s
	W0819 21:14:33.410396       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0819 21:14:49.095238       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0819 21:15:05.060848       1 request.go:655] Throttling request took 1.048208338s, request: GET:https://192.168.85.2:8443/apis/coordination.k8s.io/v1?timeout=32s
	W0819 21:15:05.912356       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0819 21:15:19.602413       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0819 21:15:37.562839       1 request.go:655] Throttling request took 1.04850511s, request: GET:https://192.168.85.2:8443/apis/coordination.k8s.io/v1?timeout=32s
	W0819 21:15:38.414352       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0819 21:15:50.104492       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0819 21:16:10.064956       1 request.go:655] Throttling request took 1.048215287s, request: GET:https://192.168.85.2:8443/apis/coordination.k8s.io/v1beta1?timeout=32s
	W0819 21:16:10.916497       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0819 21:16:20.644621       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0819 21:16:42.566859       1 request.go:655] Throttling request took 1.048380714s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
	W0819 21:16:43.418434       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0819 21:16:51.146905       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	
	
	==> kube-proxy [3eaea7a4d564682eae94a2b48007416b88761d90abe2579a6ba16eff34bdba9a] <==
	I0819 21:11:01.839659       1 node.go:172] Successfully retrieved node IP: 192.168.85.2
	I0819 21:11:01.839950       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.85.2), assume IPv4 operation
	W0819 21:11:01.863917       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0819 21:11:01.864135       1 server_others.go:185] Using iptables Proxier.
	I0819 21:11:01.864679       1 server.go:650] Version: v1.20.0
	I0819 21:11:01.865602       1 config.go:224] Starting endpoint slice config controller
	I0819 21:11:01.865772       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0819 21:11:01.865948       1 config.go:315] Starting service config controller
	I0819 21:11:01.866042       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0819 21:11:01.966357       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0819 21:11:01.966661       1 shared_informer.go:247] Caches are synced for service config 
	
	
	==> kube-proxy [63fbaf1d278cf5491ca7dd5ad4fdb07c3e8a90f6f4dfbcf5e073c12bf517db0e] <==
	I0819 21:08:17.635555       1 node.go:172] Successfully retrieved node IP: 192.168.85.2
	I0819 21:08:17.635641       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.85.2), assume IPv4 operation
	W0819 21:08:17.659167       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0819 21:08:17.659250       1 server_others.go:185] Using iptables Proxier.
	I0819 21:08:17.659451       1 server.go:650] Version: v1.20.0
	I0819 21:08:17.660277       1 config.go:315] Starting service config controller
	I0819 21:08:17.660286       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0819 21:08:17.660303       1 config.go:224] Starting endpoint slice config controller
	I0819 21:08:17.660307       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0819 21:08:17.761449       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0819 21:08:17.761523       1 shared_informer.go:247] Caches are synced for service config 
	
	
	==> kube-scheduler [62242d9fb5d7108b3cbff9c68fdf6693fa30bbe1a57a81165d6adf95d2f9a15a] <==
	I0819 21:07:50.855590       1 serving.go:331] Generated self-signed cert in-memory
	W0819 21:07:56.792780       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0819 21:07:56.792819       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0819 21:07:56.792829       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0819 21:07:56.792834       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0819 21:07:56.884146       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0819 21:07:56.887113       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0819 21:07:56.887728       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0819 21:07:56.887890       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0819 21:07:56.895115       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0819 21:07:56.898979       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0819 21:07:56.899365       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0819 21:07:56.900932       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0819 21:07:56.901742       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0819 21:07:56.902004       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0819 21:07:56.903337       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0819 21:07:56.904890       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0819 21:07:56.905859       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0819 21:07:56.910629       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0819 21:07:56.911047       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0819 21:07:56.911796       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0819 21:07:57.733164       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0819 21:07:57.775931       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0819 21:07:57.807718       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0819 21:07:59.887934       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kube-scheduler [c4fa68e3e489b8ed0f15e0a99bcf04281f96ae3328ebc705a20531880e4d77b4] <==
	I0819 21:10:51.210123       1 serving.go:331] Generated self-signed cert in-memory
	W0819 21:10:57.474711       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0819 21:10:57.474748       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0819 21:10:57.474770       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0819 21:10:57.474776       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0819 21:10:57.753548       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0819 21:10:57.759137       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0819 21:10:57.759210       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0819 21:10:57.759231       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0819 21:10:57.864076       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kubelet <==
	Aug 19 21:15:18 old-k8s-version-127648 kubelet[665]: I0819 21:15:18.549161     665 scope.go:95] [topologymanager] RemoveContainer - Container ID: 90ffd31b3114d75eda8948c5e9153728e6ff7db21822b6f58bafd791e62e15bc
	Aug 19 21:15:18 old-k8s-version-127648 kubelet[665]: E0819 21:15:18.549558     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	Aug 19 21:15:19 old-k8s-version-127648 kubelet[665]: E0819 21:15:19.549582     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 19 21:15:31 old-k8s-version-127648 kubelet[665]: I0819 21:15:31.548813     665 scope.go:95] [topologymanager] RemoveContainer - Container ID: 90ffd31b3114d75eda8948c5e9153728e6ff7db21822b6f58bafd791e62e15bc
	Aug 19 21:15:31 old-k8s-version-127648 kubelet[665]: E0819 21:15:31.549167     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	Aug 19 21:15:34 old-k8s-version-127648 kubelet[665]: E0819 21:15:34.549642     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 19 21:15:46 old-k8s-version-127648 kubelet[665]: E0819 21:15:46.549479     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 19 21:15:46 old-k8s-version-127648 kubelet[665]: I0819 21:15:46.550894     665 scope.go:95] [topologymanager] RemoveContainer - Container ID: 90ffd31b3114d75eda8948c5e9153728e6ff7db21822b6f58bafd791e62e15bc
	Aug 19 21:15:46 old-k8s-version-127648 kubelet[665]: E0819 21:15:46.551366     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	Aug 19 21:15:59 old-k8s-version-127648 kubelet[665]: I0819 21:15:59.548729     665 scope.go:95] [topologymanager] RemoveContainer - Container ID: 90ffd31b3114d75eda8948c5e9153728e6ff7db21822b6f58bafd791e62e15bc
	Aug 19 21:15:59 old-k8s-version-127648 kubelet[665]: E0819 21:15:59.549085     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	Aug 19 21:16:00 old-k8s-version-127648 kubelet[665]: E0819 21:16:00.554355     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 19 21:16:14 old-k8s-version-127648 kubelet[665]: E0819 21:16:14.549495     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 19 21:16:14 old-k8s-version-127648 kubelet[665]: I0819 21:16:14.550464     665 scope.go:95] [topologymanager] RemoveContainer - Container ID: 90ffd31b3114d75eda8948c5e9153728e6ff7db21822b6f58bafd791e62e15bc
	Aug 19 21:16:14 old-k8s-version-127648 kubelet[665]: E0819 21:16:14.550809     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	Aug 19 21:16:28 old-k8s-version-127648 kubelet[665]: I0819 21:16:28.549487     665 scope.go:95] [topologymanager] RemoveContainer - Container ID: 90ffd31b3114d75eda8948c5e9153728e6ff7db21822b6f58bafd791e62e15bc
	Aug 19 21:16:28 old-k8s-version-127648 kubelet[665]: E0819 21:16:28.551255     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 19 21:16:28 old-k8s-version-127648 kubelet[665]: E0819 21:16:28.549823     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	Aug 19 21:16:39 old-k8s-version-127648 kubelet[665]: E0819 21:16:39.556742     665 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host
	Aug 19 21:16:39 old-k8s-version-127648 kubelet[665]: E0819 21:16:39.556786     665 kuberuntime_image.go:51] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host
	Aug 19 21:16:39 old-k8s-version-127648 kubelet[665]: E0819 21:16:39.556913     665 kuberuntime_manager.go:829] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:metrics-server-token-x764s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec
:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-9975d5f86-4glzw_kube-system(611ff47
f-2896-429d-b32c-b9fbf62a64f3): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host
	Aug 19 21:16:39 old-k8s-version-127648 kubelet[665]: E0819 21:16:39.556947     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	Aug 19 21:16:43 old-k8s-version-127648 kubelet[665]: I0819 21:16:43.548646     665 scope.go:95] [topologymanager] RemoveContainer - Container ID: 90ffd31b3114d75eda8948c5e9153728e6ff7db21822b6f58bafd791e62e15bc
	Aug 19 21:16:43 old-k8s-version-127648 kubelet[665]: E0819 21:16:43.549476     665 pod_workers.go:191] Error syncing pod 838001cf-190e-416b-a50f-c260a8b8a946 ("dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lrvtv_kubernetes-dashboard(838001cf-190e-416b-a50f-c260a8b8a946)"
	Aug 19 21:16:51 old-k8s-version-127648 kubelet[665]: E0819 21:16:51.555869     665 pod_workers.go:191] Error syncing pod 611ff47f-2896-429d-b32c-b9fbf62a64f3 ("metrics-server-9975d5f86-4glzw_kube-system(611ff47f-2896-429d-b32c-b9fbf62a64f3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	
	
	==> kubernetes-dashboard [18b6c4b9755bc4f8f489bd6d79488572db886134c98fa78b6b1ebbbaf13e78a1] <==
	2024/08/19 21:11:24 Using namespace: kubernetes-dashboard
	2024/08/19 21:11:24 Using in-cluster config to connect to apiserver
	2024/08/19 21:11:24 Using secret token for csrf signing
	2024/08/19 21:11:24 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/08/19 21:11:24 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/08/19 21:11:24 Successful initial request to the apiserver, version: v1.20.0
	2024/08/19 21:11:24 Generating JWE encryption key
	2024/08/19 21:11:24 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/08/19 21:11:24 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/08/19 21:11:24 Initializing JWE encryption key from synchronized object
	2024/08/19 21:11:24 Creating in-cluster Sidecar client
	2024/08/19 21:11:24 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/19 21:11:24 Serving insecurely on HTTP port: 9090
	2024/08/19 21:11:54 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/19 21:12:24 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/19 21:12:54 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/19 21:13:24 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/19 21:13:54 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/19 21:14:24 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/19 21:14:54 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/19 21:15:24 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/19 21:15:54 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/19 21:16:24 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/19 21:16:54 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/19 21:11:24 Starting overwatch
	
	
	==> storage-provisioner [3b3340693ab7f6759c32d8665585f8d968c8d552cf6180284c4eaeed124e3d8e] <==
	I0819 21:11:46.856654       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0819 21:11:46.910086       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0819 21:11:46.910303       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0819 21:12:04.420366       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0819 21:12:04.420600       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-127648_d54e4fd2-f404-4b49-9985-07c6db5272fd!
	I0819 21:12:04.421779       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"75385448-a290-4473-9ab6-ed14a3aaa903", APIVersion:"v1", ResourceVersion:"882", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-127648_d54e4fd2-f404-4b49-9985-07c6db5272fd became leader
	I0819 21:12:04.520737       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-127648_d54e4fd2-f404-4b49-9985-07c6db5272fd!
	
	
	==> storage-provisioner [556145ddf35210e0aed6253010261fc026d520ff7cf4f070c664092c877a33dd] <==
	I0819 21:11:00.452711       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0819 21:11:30.455286       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-127648 -n old-k8s-version-127648
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-127648 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-9975d5f86-4glzw
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-127648 describe pod metrics-server-9975d5f86-4glzw
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-127648 describe pod metrics-server-9975d5f86-4glzw: exit status 1 (121.740772ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-9975d5f86-4glzw" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-127648 describe pod metrics-server-9975d5f86-4glzw: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (384.83s)

                                                
                                    

Test pass (298/328)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 8.81
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.2
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.31.0/json-events 9.55
13 TestDownloadOnly/v1.31.0/preload-exists 0
17 TestDownloadOnly/v1.31.0/LogsDuration 0.07
18 TestDownloadOnly/v1.31.0/DeleteAll 0.2
19 TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.56
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.08
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 151.83
31 TestAddons/serial/GCPAuth/Namespaces 0.18
33 TestAddons/parallel/Registry 15
34 TestAddons/parallel/Ingress 18.9
35 TestAddons/parallel/InspektorGadget 11.84
36 TestAddons/parallel/MetricsServer 7.05
39 TestAddons/parallel/CSI 66.08
40 TestAddons/parallel/Headlamp 16.07
41 TestAddons/parallel/CloudSpanner 6.75
42 TestAddons/parallel/LocalPath 51.79
43 TestAddons/parallel/NvidiaDevicePlugin 5.61
44 TestAddons/parallel/Yakd 11.97
45 TestAddons/StoppedEnableDisable 12.3
46 TestCertOptions 40.18
47 TestCertExpiration 230.23
49 TestForceSystemdFlag 41.44
50 TestForceSystemdEnv 42.23
51 TestDockerEnvContainerd 45.39
56 TestErrorSpam/setup 29.4
57 TestErrorSpam/start 0.77
58 TestErrorSpam/status 1.07
59 TestErrorSpam/pause 1.89
60 TestErrorSpam/unpause 2.15
61 TestErrorSpam/stop 1.5
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 48.96
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 7.16
68 TestFunctional/serial/KubeContext 0.07
69 TestFunctional/serial/KubectlGetPods 0.09
72 TestFunctional/serial/CacheCmd/cache/add_remote 4.25
73 TestFunctional/serial/CacheCmd/cache/add_local 1.27
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
75 TestFunctional/serial/CacheCmd/cache/list 0.05
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.29
77 TestFunctional/serial/CacheCmd/cache/cache_reload 2.09
78 TestFunctional/serial/CacheCmd/cache/delete 0.11
79 TestFunctional/serial/MinikubeKubectlCmd 0.14
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
81 TestFunctional/serial/ExtraConfig 44.56
82 TestFunctional/serial/ComponentHealth 0.1
83 TestFunctional/serial/LogsCmd 1.66
84 TestFunctional/serial/LogsFileCmd 1.98
85 TestFunctional/serial/InvalidService 4.66
87 TestFunctional/parallel/ConfigCmd 0.48
88 TestFunctional/parallel/DashboardCmd 9.28
89 TestFunctional/parallel/DryRun 0.44
90 TestFunctional/parallel/InternationalLanguage 0.22
91 TestFunctional/parallel/StatusCmd 1.26
95 TestFunctional/parallel/ServiceCmdConnect 10.7
96 TestFunctional/parallel/AddonsCmd 0.17
97 TestFunctional/parallel/PersistentVolumeClaim 26.15
99 TestFunctional/parallel/SSHCmd 0.71
100 TestFunctional/parallel/CpCmd 2.3
102 TestFunctional/parallel/FileSync 0.34
103 TestFunctional/parallel/CertSync 2.06
107 TestFunctional/parallel/NodeLabels 0.11
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.66
111 TestFunctional/parallel/License 0.24
113 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.63
114 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
116 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.41
117 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.1
118 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
122 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
123 TestFunctional/parallel/ServiceCmd/DeployApp 7.22
124 TestFunctional/parallel/ProfileCmd/profile_not_create 0.47
125 TestFunctional/parallel/ServiceCmd/List 0.65
126 TestFunctional/parallel/ProfileCmd/profile_list 0.42
127 TestFunctional/parallel/ServiceCmd/JSONOutput 0.62
128 TestFunctional/parallel/ProfileCmd/profile_json_output 0.49
129 TestFunctional/parallel/ServiceCmd/HTTPS 0.54
130 TestFunctional/parallel/MountCmd/any-port 6.5
131 TestFunctional/parallel/ServiceCmd/Format 0.43
132 TestFunctional/parallel/ServiceCmd/URL 0.49
133 TestFunctional/parallel/MountCmd/specific-port 1.31
134 TestFunctional/parallel/MountCmd/VerifyCleanup 1.65
135 TestFunctional/parallel/Version/short 0.09
136 TestFunctional/parallel/Version/components 1.36
137 TestFunctional/parallel/ImageCommands/ImageListShort 0.28
138 TestFunctional/parallel/ImageCommands/ImageListTable 0.25
139 TestFunctional/parallel/ImageCommands/ImageListJson 0.26
140 TestFunctional/parallel/ImageCommands/ImageListYaml 0.29
141 TestFunctional/parallel/ImageCommands/ImageBuild 3.14
142 TestFunctional/parallel/ImageCommands/Setup 0.66
143 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.44
144 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.27
145 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.64
146 TestFunctional/parallel/UpdateContextCmd/no_changes 0.24
147 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.21
148 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.17
149 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.42
150 TestFunctional/parallel/ImageCommands/ImageRemove 0.49
151 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.79
152 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.44
153 TestFunctional/delete_echo-server_images 0.04
154 TestFunctional/delete_my-image_image 0.02
155 TestFunctional/delete_minikube_cached_images 0.02
159 TestMultiControlPlane/serial/StartCluster 110.38
160 TestMultiControlPlane/serial/DeployApp 30.55
161 TestMultiControlPlane/serial/PingHostFromPods 1.65
162 TestMultiControlPlane/serial/AddWorkerNode 24.65
163 TestMultiControlPlane/serial/NodeLabels 0.12
164 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.75
165 TestMultiControlPlane/serial/CopyFile 19.43
166 TestMultiControlPlane/serial/StopSecondaryNode 12.98
167 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.55
168 TestMultiControlPlane/serial/RestartSecondaryNode 19.21
169 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.82
170 TestMultiControlPlane/serial/RestartClusterKeepsNodes 143.57
171 TestMultiControlPlane/serial/DeleteSecondaryNode 10.68
172 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.55
173 TestMultiControlPlane/serial/StopCluster 36.16
174 TestMultiControlPlane/serial/RestartCluster 79.94
175 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.56
176 TestMultiControlPlane/serial/AddSecondaryNode 41.68
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.77
181 TestJSONOutput/start/Command 60.04
182 TestJSONOutput/start/Audit 0
184 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
185 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/pause/Command 0.73
188 TestJSONOutput/pause/Audit 0
190 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/unpause/Command 0.65
194 TestJSONOutput/unpause/Audit 0
196 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/stop/Command 5.79
200 TestJSONOutput/stop/Audit 0
202 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
204 TestErrorJSONOutput 0.23
206 TestKicCustomNetwork/create_custom_network 40.83
207 TestKicCustomNetwork/use_default_bridge_network 34.41
208 TestKicExistingNetwork 35.65
209 TestKicCustomSubnet 33.7
210 TestKicStaticIP 33.41
211 TestMainNoArgs 0.05
212 TestMinikubeProfile 71.33
215 TestMountStart/serial/StartWithMountFirst 6.48
216 TestMountStart/serial/VerifyMountFirst 0.3
217 TestMountStart/serial/StartWithMountSecond 7.31
218 TestMountStart/serial/VerifyMountSecond 0.26
219 TestMountStart/serial/DeleteFirst 1.64
220 TestMountStart/serial/VerifyMountPostDelete 0.25
221 TestMountStart/serial/Stop 1.2
222 TestMountStart/serial/RestartStopped 7.87
223 TestMountStart/serial/VerifyMountPostStop 0.26
226 TestMultiNode/serial/FreshStart2Nodes 65.88
227 TestMultiNode/serial/DeployApp2Nodes 15.04
228 TestMultiNode/serial/PingHostFrom2Pods 1.05
229 TestMultiNode/serial/AddNode 16.3
230 TestMultiNode/serial/MultiNodeLabels 0.09
231 TestMultiNode/serial/ProfileList 0.32
232 TestMultiNode/serial/CopyFile 9.99
233 TestMultiNode/serial/StopNode 2.25
234 TestMultiNode/serial/StartAfterStop 9.5
235 TestMultiNode/serial/RestartKeepsNodes 91.73
236 TestMultiNode/serial/DeleteNode 5.52
237 TestMultiNode/serial/StopMultiNode 24.05
238 TestMultiNode/serial/RestartMultiNode 47.65
239 TestMultiNode/serial/ValidateNameConflict 34.28
244 TestPreload 113.66
246 TestScheduledStopUnix 110.22
249 TestInsufficientStorage 10.46
250 TestRunningBinaryUpgrade 100.51
252 TestKubernetesUpgrade 352.29
253 TestMissingContainerUpgrade 152.08
255 TestPause/serial/Start 58.71
257 TestNoKubernetes/serial/StartNoK8sWithVersion 0.11
258 TestNoKubernetes/serial/StartWithK8s 41.53
259 TestNoKubernetes/serial/StartWithStopK8s 17.89
260 TestPause/serial/SecondStartNoReconfiguration 7.48
261 TestNoKubernetes/serial/Start 9.57
262 TestPause/serial/Pause 0.89
263 TestPause/serial/VerifyStatus 0.4
264 TestPause/serial/Unpause 0.87
265 TestPause/serial/PauseAgain 1.17
266 TestNoKubernetes/serial/VerifyK8sNotRunning 0.34
267 TestNoKubernetes/serial/ProfileList 2.92
268 TestPause/serial/DeletePaused 2.85
269 TestNoKubernetes/serial/Stop 1.26
270 TestPause/serial/VerifyDeletedResources 2.81
271 TestNoKubernetes/serial/StartNoArgs 6.88
272 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.34
273 TestStoppedBinaryUpgrade/Setup 0.8
274 TestStoppedBinaryUpgrade/Upgrade 107.34
275 TestStoppedBinaryUpgrade/MinikubeLogs 1.17
290 TestNetworkPlugins/group/false 5.08
295 TestStartStop/group/old-k8s-version/serial/FirstStart 177.33
297 TestStartStop/group/no-preload/serial/FirstStart 75.53
298 TestStartStop/group/old-k8s-version/serial/DeployApp 8.88
299 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.83
300 TestStartStop/group/old-k8s-version/serial/Stop 12.85
301 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.29
303 TestStartStop/group/no-preload/serial/DeployApp 8.43
304 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.18
305 TestStartStop/group/no-preload/serial/Stop 12.13
306 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
307 TestStartStop/group/no-preload/serial/SecondStart 267.4
308 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
309 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.1
310 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.29
311 TestStartStop/group/no-preload/serial/Pause 3.23
313 TestStartStop/group/embed-certs/serial/FirstStart 65.51
314 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.02
315 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 6.11
316 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.25
317 TestStartStop/group/old-k8s-version/serial/Pause 2.99
319 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 54.66
320 TestStartStop/group/embed-certs/serial/DeployApp 9.49
321 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.55
322 TestStartStop/group/embed-certs/serial/Stop 12.56
323 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.36
324 TestStartStop/group/embed-certs/serial/SecondStart 267.51
325 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.4
326 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.18
327 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.04
328 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.19
329 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 268.24
330 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
331 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.12
332 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.26
333 TestStartStop/group/embed-certs/serial/Pause 3
335 TestStartStop/group/newest-cni/serial/FirstStart 38.06
336 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
337 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.15
338 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.26
339 TestStartStop/group/default-k8s-diff-port/serial/Pause 4.54
340 TestStartStop/group/newest-cni/serial/DeployApp 0
341 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 2.18
342 TestStartStop/group/newest-cni/serial/Stop 1.41
343 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.24
344 TestStartStop/group/newest-cni/serial/SecondStart 19.64
345 TestNetworkPlugins/group/auto/Start 61.66
346 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
347 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
348 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.27
349 TestStartStop/group/newest-cni/serial/Pause 3.5
350 TestNetworkPlugins/group/kindnet/Start 63.83
351 TestNetworkPlugins/group/auto/KubeletFlags 0.32
352 TestNetworkPlugins/group/auto/NetCatPod 11.35
353 TestNetworkPlugins/group/auto/DNS 0.2
354 TestNetworkPlugins/group/auto/Localhost 0.15
355 TestNetworkPlugins/group/auto/HairPin 0.18
356 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
357 TestNetworkPlugins/group/calico/Start 72.68
358 TestNetworkPlugins/group/kindnet/KubeletFlags 0.35
359 TestNetworkPlugins/group/kindnet/NetCatPod 9.42
360 TestNetworkPlugins/group/kindnet/DNS 0.24
361 TestNetworkPlugins/group/kindnet/Localhost 0.19
362 TestNetworkPlugins/group/kindnet/HairPin 0.17
363 TestNetworkPlugins/group/custom-flannel/Start 53.8
364 TestNetworkPlugins/group/calico/ControllerPod 6.01
365 TestNetworkPlugins/group/calico/KubeletFlags 0.33
366 TestNetworkPlugins/group/calico/NetCatPod 9.26
367 TestNetworkPlugins/group/calico/DNS 0.21
368 TestNetworkPlugins/group/calico/Localhost 0.18
369 TestNetworkPlugins/group/calico/HairPin 0.19
370 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.34
371 TestNetworkPlugins/group/custom-flannel/NetCatPod 8.3
372 TestNetworkPlugins/group/custom-flannel/DNS 0.31
373 TestNetworkPlugins/group/custom-flannel/Localhost 0.2
374 TestNetworkPlugins/group/custom-flannel/HairPin 0.24
375 TestNetworkPlugins/group/enable-default-cni/Start 51.46
376 TestNetworkPlugins/group/flannel/Start 57.15
377 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.39
378 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.4
379 TestNetworkPlugins/group/enable-default-cni/DNS 0.19
380 TestNetworkPlugins/group/enable-default-cni/Localhost 0.16
381 TestNetworkPlugins/group/enable-default-cni/HairPin 0.16
382 TestNetworkPlugins/group/flannel/ControllerPod 6.01
383 TestNetworkPlugins/group/flannel/KubeletFlags 0.36
384 TestNetworkPlugins/group/flannel/NetCatPod 9.35
385 TestNetworkPlugins/group/bridge/Start 81.58
386 TestNetworkPlugins/group/flannel/DNS 0.28
387 TestNetworkPlugins/group/flannel/Localhost 0.2
388 TestNetworkPlugins/group/flannel/HairPin 0.2
389 TestNetworkPlugins/group/bridge/KubeletFlags 0.28
390 TestNetworkPlugins/group/bridge/NetCatPod 10.27
391 TestNetworkPlugins/group/bridge/DNS 0.2
392 TestNetworkPlugins/group/bridge/Localhost 0.19
393 TestNetworkPlugins/group/bridge/HairPin 0.17
TestDownloadOnly/v1.20.0/json-events (8.81s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-166545 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-166545 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (8.812590441s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (8.81s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-166545
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-166545: exit status 85 (73.906027ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-166545 | jenkins | v1.33.1 | 19 Aug 24 20:21 UTC |          |
	|         | -p download-only-166545        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 20:21:03
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 20:21:03.127443 1145023 out.go:345] Setting OutFile to fd 1 ...
	I0819 20:21:03.127651 1145023 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 20:21:03.127679 1145023 out.go:358] Setting ErrFile to fd 2...
	I0819 20:21:03.127698 1145023 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 20:21:03.127984 1145023 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-1139612/.minikube/bin
	W0819 20:21:03.128173 1145023 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19423-1139612/.minikube/config/config.json: open /home/jenkins/minikube-integration/19423-1139612/.minikube/config/config.json: no such file or directory
	I0819 20:21:03.128680 1145023 out.go:352] Setting JSON to true
	I0819 20:21:03.129615 1145023 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":14610,"bootTime":1724084253,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0819 20:21:03.129720 1145023 start.go:139] virtualization:  
	I0819 20:21:03.133106 1145023 out.go:97] [download-only-166545] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	W0819 20:21:03.133288 1145023 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19423-1139612/.minikube/cache/preloaded-tarball: no such file or directory
	I0819 20:21:03.133328 1145023 notify.go:220] Checking for updates...
	I0819 20:21:03.135844 1145023 out.go:169] MINIKUBE_LOCATION=19423
	I0819 20:21:03.139043 1145023 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 20:21:03.141250 1145023 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19423-1139612/kubeconfig
	I0819 20:21:03.143211 1145023 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-1139612/.minikube
	I0819 20:21:03.146329 1145023 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0819 20:21:03.149504 1145023 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0819 20:21:03.149778 1145023 driver.go:394] Setting default libvirt URI to qemu:///system
	I0819 20:21:03.173896 1145023 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0819 20:21:03.174010 1145023 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 20:21:03.235113 1145023 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-08-19 20:21:03.224992049 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0819 20:21:03.235236 1145023 docker.go:307] overlay module found
	I0819 20:21:03.239429 1145023 out.go:97] Using the docker driver based on user configuration
	I0819 20:21:03.239460 1145023 start.go:297] selected driver: docker
	I0819 20:21:03.239468 1145023 start.go:901] validating driver "docker" against <nil>
	I0819 20:21:03.239573 1145023 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 20:21:03.291769 1145023 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-08-19 20:21:03.282462607 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0819 20:21:03.291941 1145023 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 20:21:03.292280 1145023 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0819 20:21:03.292446 1145023 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0819 20:21:03.294631 1145023 out.go:169] Using Docker driver with root privileges
	I0819 20:21:03.296079 1145023 cni.go:84] Creating CNI manager for ""
	I0819 20:21:03.296108 1145023 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0819 20:21:03.296122 1145023 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0819 20:21:03.296210 1145023 start.go:340] cluster config:
	{Name:download-only-166545 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-166545 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 20:21:03.298845 1145023 out.go:97] Starting "download-only-166545" primary control-plane node in "download-only-166545" cluster
	I0819 20:21:03.298867 1145023 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0819 20:21:03.301108 1145023 out.go:97] Pulling base image v0.0.44-1723740748-19452 ...
	I0819 20:21:03.301147 1145023 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0819 20:21:03.301309 1145023 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local docker daemon
	I0819 20:21:03.317574 1145023 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d to local cache
	I0819 20:21:03.318167 1145023 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local cache directory
	I0819 20:21:03.318272 1145023 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d to local cache
	I0819 20:21:03.372688 1145023 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0819 20:21:03.372725 1145023 cache.go:56] Caching tarball of preloaded images
	I0819 20:21:03.373465 1145023 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0819 20:21:03.375119 1145023 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0819 20:21:03.375138 1145023 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I0819 20:21:03.468999 1145023 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:7e3d48ccb9f143791669d02e14ce1643 -> /home/jenkins/minikube-integration/19423-1139612/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0819 20:21:07.371891 1145023 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d as a tarball
	
	
	* The control-plane node download-only-166545 host does not exist
	  To start a cluster, run: "minikube start -p download-only-166545"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.2s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.20s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-166545
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.31.0/json-events (9.55s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-327825 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-327825 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (9.544834992s)
--- PASS: TestDownloadOnly/v1.31.0/json-events (9.55s)
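For reference, the download-only flow exercised here can be reproduced by hand with the same binary and flags; this is only a sketch, and the profile name download-only-demo is illustrative:

    # Prefetch the kicbase image and the v1.31.0 containerd preload without starting a cluster
    out/minikube-linux-arm64 start --download-only -p download-only-demo --force --alsologtostderr \
      --kubernetes-version=v1.31.0 --container-runtime=containerd --driver=docker
    # Remove the throwaway profile once the caches are populated
    out/minikube-linux-arm64 delete -p download-only-demo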

                                                
                                    
TestDownloadOnly/v1.31.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-327825
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-327825: exit status 85 (66.707956ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-166545 | jenkins | v1.33.1 | 19 Aug 24 20:21 UTC |                     |
	|         | -p download-only-166545        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 19 Aug 24 20:21 UTC | 19 Aug 24 20:21 UTC |
	| delete  | -p download-only-166545        | download-only-166545 | jenkins | v1.33.1 | 19 Aug 24 20:21 UTC | 19 Aug 24 20:21 UTC |
	| start   | -o=json --download-only        | download-only-327825 | jenkins | v1.33.1 | 19 Aug 24 20:21 UTC |                     |
	|         | -p download-only-327825        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 20:21:12
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 20:21:12.347921 1145227 out.go:345] Setting OutFile to fd 1 ...
	I0819 20:21:12.348135 1145227 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 20:21:12.348185 1145227 out.go:358] Setting ErrFile to fd 2...
	I0819 20:21:12.348206 1145227 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 20:21:12.348505 1145227 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-1139612/.minikube/bin
	I0819 20:21:12.348986 1145227 out.go:352] Setting JSON to true
	I0819 20:21:12.349952 1145227 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":14619,"bootTime":1724084253,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0819 20:21:12.350044 1145227 start.go:139] virtualization:  
	I0819 20:21:12.352561 1145227 out.go:97] [download-only-327825] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0819 20:21:12.352849 1145227 notify.go:220] Checking for updates...
	I0819 20:21:12.354429 1145227 out.go:169] MINIKUBE_LOCATION=19423
	I0819 20:21:12.355815 1145227 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 20:21:12.357284 1145227 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19423-1139612/kubeconfig
	I0819 20:21:12.358491 1145227 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-1139612/.minikube
	I0819 20:21:12.359725 1145227 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0819 20:21:12.362124 1145227 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0819 20:21:12.362423 1145227 driver.go:394] Setting default libvirt URI to qemu:///system
	I0819 20:21:12.383752 1145227 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0819 20:21:12.383869 1145227 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 20:21:12.451920 1145227 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-08-19 20:21:12.442619594 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0819 20:21:12.452041 1145227 docker.go:307] overlay module found
	I0819 20:21:12.453231 1145227 out.go:97] Using the docker driver based on user configuration
	I0819 20:21:12.453259 1145227 start.go:297] selected driver: docker
	I0819 20:21:12.453266 1145227 start.go:901] validating driver "docker" against <nil>
	I0819 20:21:12.453373 1145227 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 20:21:12.505450 1145227 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-08-19 20:21:12.496483039 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0819 20:21:12.505630 1145227 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 20:21:12.505914 1145227 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0819 20:21:12.506079 1145227 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0819 20:21:12.507602 1145227 out.go:169] Using Docker driver with root privileges
	I0819 20:21:12.508739 1145227 cni.go:84] Creating CNI manager for ""
	I0819 20:21:12.508761 1145227 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0819 20:21:12.508774 1145227 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0819 20:21:12.508857 1145227 start.go:340] cluster config:
	{Name:download-only-327825 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:download-only-327825 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 20:21:12.510269 1145227 out.go:97] Starting "download-only-327825" primary control-plane node in "download-only-327825" cluster
	I0819 20:21:12.510289 1145227 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0819 20:21:12.511203 1145227 out.go:97] Pulling base image v0.0.44-1723740748-19452 ...
	I0819 20:21:12.511231 1145227 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime containerd
	I0819 20:21:12.511394 1145227 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local docker daemon
	I0819 20:21:12.526727 1145227 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d to local cache
	I0819 20:21:12.526874 1145227 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local cache directory
	I0819 20:21:12.526898 1145227 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local cache directory, skipping pull
	I0819 20:21:12.526907 1145227 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d exists in cache, skipping pull
	I0819 20:21:12.526918 1145227 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d as a tarball
	I0819 20:21:12.573718 1145227 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4
	I0819 20:21:12.573763 1145227 cache.go:56] Caching tarball of preloaded images
	I0819 20:21:12.573942 1145227 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime containerd
	I0819 20:21:12.575637 1145227 out.go:97] Downloading Kubernetes v1.31.0 preload ...
	I0819 20:21:12.575666 1145227 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4 ...
	I0819 20:21:12.665697 1145227 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:ea65ad5fd42227e06b9323ff45647208 -> /home/jenkins/minikube-integration/19423-1139612/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-327825 host does not exist
	  To start a cluster, run: "minikube start -p download-only-327825"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.31.0/DeleteAll (0.2s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.0/DeleteAll (0.20s)

                                                
                                    
TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-327825
--- PASS: TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestBinaryMirror (0.56s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-258519 --alsologtostderr --binary-mirror http://127.0.0.1:40993 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-258519" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-258519
--- PASS: TestBinaryMirror (0.56s)
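The binary-mirror check can be reproduced with the flags shown above; the mirror URL is whatever local HTTP server is serving the Kubernetes binaries (this run used 127.0.0.1:40993), and binary-mirror-demo is an illustrative profile name:

    # Download the Kubernetes binaries via a locally served mirror instead of the default upstream
    out/minikube-linux-arm64 start --download-only -p binary-mirror-demo --alsologtostderr \
      --binary-mirror http://127.0.0.1:40993 --driver=docker --container-runtime=containerd
    out/minikube-linux-arm64 delete -p binary-mirror-demo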

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-069800
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-069800: exit status 85 (81.429385ms)

                                                
                                                
-- stdout --
	* Profile "addons-069800" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-069800"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-069800
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-069800: exit status 85 (70.251597ms)

                                                
                                                
-- stdout --
	* Profile "addons-069800" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-069800"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/Setup (151.83s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p addons-069800 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p addons-069800 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns: (2m31.823137363s)
--- PASS: TestAddons/Setup (151.83s)
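The cluster used by the remaining TestAddons subtests is created by the single start command above. A trimmed sketch of the same invocation (the full --addons list is as shown in the log):

    # One-node docker/containerd cluster with addons enabled at start time
    out/minikube-linux-arm64 start -p addons-069800 --wait=true --memory=4000 --alsologtostderr \
      --driver=docker --container-runtime=containerd \
      --addons=registry --addons=metrics-server --addons=ingress --addons=ingress-dns --addons=volcano
    # Addons can also be toggled on a running profile
    out/minikube-linux-arm64 -p addons-069800 addons disable metrics-server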

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.18s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-069800 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-069800 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.18s)

                                                
                                    
TestAddons/parallel/Registry (15s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 4.171696ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6fb4cdfc84-gr6wk" [82b3deb3-5365-4530-b213-a092c0d9a803] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.00627125s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-hq5bw" [dfa32c3a-0d40-440e-885e-5723158e7561] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.0042468s
addons_test.go:342: (dbg) Run:  kubectl --context addons-069800 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-069800 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-069800 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (2.999083125s)
addons_test.go:361: (dbg) Run:  out/minikube-linux-arm64 -p addons-069800 ip
2024/08/19 20:27:49 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-arm64 -p addons-069800 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.00s)
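The registry check boils down to two commands, both taken verbatim from the log above: print the node IP that the registry is proxied on, then probe the in-cluster service DNS name from a throwaway busybox pod:

    out/minikube-linux-arm64 -p addons-069800 ip
    kubectl --context addons-069800 run --rm registry-test --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -it -- \
      sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"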

                                                
                                    
TestAddons/parallel/Ingress (18.9s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-069800 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-069800 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-069800 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [030d9bff-320e-4609-b762-75d99bc14a60] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [030d9bff-320e-4609-b762-75d99bc14a60] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 8.003304412s
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-069800 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-069800 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-069800 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p addons-069800 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-arm64 -p addons-069800 addons disable ingress-dns --alsologtostderr -v=1: (1.285544293s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-arm64 -p addons-069800 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-arm64 -p addons-069800 addons disable ingress --alsologtostderr -v=1: (7.896290037s)
--- PASS: TestAddons/parallel/Ingress (18.90s)
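The ingress checks above can be repeated manually; 192.168.49.2 is the node IP reported for this run:

    # nginx ingress: request the controller from inside the node with the test Host header
    out/minikube-linux-arm64 -p addons-069800 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
    # ingress-dns: the example hostname should resolve against the node IP
    nslookup hello-john.test 192.168.49.2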

                                                
                                    
TestAddons/parallel/InspektorGadget (11.84s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-42v65" [3ead55b4-b3a1-4534-a5c8-5f8979f60bb4] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004500192s
addons_test.go:851: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-069800
addons_test.go:851: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-069800: (5.8375559s)
--- PASS: TestAddons/parallel/InspektorGadget (11.84s)

                                                
                                    
TestAddons/parallel/MetricsServer (7.05s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 3.11989ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-8988944d9-ldtcv" [e82f97ea-f8d4-47b6-a217-6e5677777532] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.004577879s
addons_test.go:417: (dbg) Run:  kubectl --context addons-069800 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-linux-arm64 -p addons-069800 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (7.05s)
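Once metrics-server is healthy, per-pod resource usage should be queryable, which is what the final step above checks:

    kubectl --context addons-069800 top pods -n kube-system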

                                                
                                    
TestAddons/parallel/CSI (66.08s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 7.681863ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-069800 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-069800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-069800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-069800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-069800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-069800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-069800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-069800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-069800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-069800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-069800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-069800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-069800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-069800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-069800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-069800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-069800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-069800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-069800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-069800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-069800 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-069800 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [563ef02c-ee2e-49b7-8e81-35de4a5cd945] Pending
helpers_test.go:344: "task-pv-pod" [563ef02c-ee2e-49b7-8e81-35de4a5cd945] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [563ef02c-ee2e-49b7-8e81-35de4a5cd945] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.003912911s
addons_test.go:590: (dbg) Run:  kubectl --context addons-069800 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-069800 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-069800 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-069800 delete pod task-pv-pod
addons_test.go:600: (dbg) Done: kubectl --context addons-069800 delete pod task-pv-pod: (1.234066356s)
addons_test.go:606: (dbg) Run:  kubectl --context addons-069800 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-069800 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-069800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-069800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-069800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-069800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-069800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-069800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-069800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-069800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-069800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-069800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-069800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-069800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-069800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-069800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-069800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-069800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-069800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-069800 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [145647e1-3b2b-45fe-99e7-0497a46ea455] Pending
helpers_test.go:344: "task-pv-pod-restore" [145647e1-3b2b-45fe-99e7-0497a46ea455] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [145647e1-3b2b-45fe-99e7-0497a46ea455] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.004227172s
addons_test.go:632: (dbg) Run:  kubectl --context addons-069800 delete pod task-pv-pod-restore
addons_test.go:632: (dbg) Done: kubectl --context addons-069800 delete pod task-pv-pod-restore: (1.395777324s)
addons_test.go:636: (dbg) Run:  kubectl --context addons-069800 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-069800 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-arm64 -p addons-069800 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-arm64 -p addons-069800 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.783111037s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-arm64 -p addons-069800 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (66.08s)
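The CSI subtest walks a provision, attach, snapshot, and restore cycle using the testdata manifests referenced above (paths are relative to the integration test's working directory). A condensed sketch of the first half:

    # Create a PVC backed by the csi-hostpath driver and a pod that mounts it
    kubectl --context addons-069800 create -f testdata/csi-hostpath-driver/pvc.yaml
    kubectl --context addons-069800 create -f testdata/csi-hostpath-driver/pv-pod.yaml
    # Snapshot the volume, then poll the objects until they are Bound / readyToUse
    kubectl --context addons-069800 create -f testdata/csi-hostpath-driver/snapshot.yaml
    kubectl --context addons-069800 get pvc hpvc -o jsonpath={.status.phase} -n default
    kubectl --context addons-069800 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default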

                                                
                                    
TestAddons/parallel/Headlamp (16.07s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-069800 --alsologtostderr -v=1
addons_test.go:830: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-069800 --alsologtostderr -v=1: (1.291149068s)
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-57fb76fcdb-6vvgk" [cbc2b1ba-be4e-4217-a66b-72156042ac9d] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-6vvgk" [cbc2b1ba-be4e-4217-a66b-72156042ac9d] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 9.003947602s
addons_test.go:839: (dbg) Run:  out/minikube-linux-arm64 -p addons-069800 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-arm64 -p addons-069800 addons disable headlamp --alsologtostderr -v=1: (5.778255645s)
--- PASS: TestAddons/parallel/Headlamp (16.07s)

                                                
                                    
TestAddons/parallel/CloudSpanner (6.75s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-c4bc9b5f8-wv6n4" [69b4e6bf-3af0-46e1-acc8-fd87bf7ac7d2] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.009254673s
addons_test.go:870: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-069800
--- PASS: TestAddons/parallel/CloudSpanner (6.75s)

                                                
                                    
TestAddons/parallel/LocalPath (51.79s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-069800 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-069800 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-069800 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-069800 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-069800 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-069800 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-069800 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [b3dcf731-fcbd-4387-8be4-1204674708d0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [b3dcf731-fcbd-4387-8be4-1204674708d0] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [b3dcf731-fcbd-4387-8be4-1204674708d0] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003874199s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-069800 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-arm64 -p addons-069800 ssh "cat /opt/local-path-provisioner/pvc-c3668b49-65e1-496e-856c-a3fa76954900_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-069800 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-069800 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-arm64 -p addons-069800 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-linux-arm64 -p addons-069800 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.573057619s)
--- PASS: TestAddons/parallel/LocalPath (51.79s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.61s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-qrxrs" [7b5516bf-34eb-427f-9819-d72c940d96e6] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.003894398s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-069800
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.61s)

                                                
                                    
TestAddons/parallel/Yakd (11.97s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-5lkpw" [81fb729b-eb0e-4bd7-85b3-70d63789f859] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003241573s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-arm64 -p addons-069800 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-arm64 -p addons-069800 addons disable yakd --alsologtostderr -v=1: (5.96625048s)
--- PASS: TestAddons/parallel/Yakd (11.97s)

                                                
                                    
TestAddons/StoppedEnableDisable (12.3s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-069800
addons_test.go:174: (dbg) Done: out/minikube-linux-arm64 stop -p addons-069800: (12.024516783s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-069800
addons_test.go:182: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-069800
addons_test.go:187: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-069800
--- PASS: TestAddons/StoppedEnableDisable (12.30s)
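This subtest confirms that addon toggles are accepted while the profile is stopped (presumably recorded in the profile config and applied on the next start). A sketch using the same commands:

    out/minikube-linux-arm64 stop -p addons-069800
    out/minikube-linux-arm64 addons enable dashboard -p addons-069800
    out/minikube-linux-arm64 addons disable dashboard -p addons-069800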

                                                
                                    
TestCertOptions (40.18s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-913353 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-913353 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (37.508204671s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-913353 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-913353 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-913353 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-913353" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-913353
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-913353: (1.972644677s)
--- PASS: TestCertOptions (40.18s)
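The certificate-options check starts a cluster with extra apiserver SANs and a non-default port, then reads the serving certificate back out of the node; cert-options-demo below is an illustrative profile name:

    out/minikube-linux-arm64 start -p cert-options-demo --memory=2048 \
      --apiserver-ips=192.168.15.15 --apiserver-names=www.google.com --apiserver-port=8555 \
      --driver=docker --container-runtime=containerd
    # The extra IPs/names should appear as SANs in the apiserver certificate
    out/minikube-linux-arm64 -p cert-options-demo ssh \
      "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"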

                                                
                                    
TestCertExpiration (230.23s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-557259 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-557259 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (40.248407544s)
E0819 21:06:58.648565 1145018 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/addons-069800/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-557259 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-557259 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (7.5323814s)
helpers_test.go:175: Cleaning up "cert-expiration-557259" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-557259
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-557259: (2.451630918s)
--- PASS: TestCertExpiration (230.23s)
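TestCertExpiration issues deliberately short-lived certificates and then restarts the same profile with a longer expiry, forcing regeneration; cert-expiration-demo is an illustrative profile name:

    out/minikube-linux-arm64 start -p cert-expiration-demo --memory=2048 --cert-expiration=3m \
      --driver=docker --container-runtime=containerd
    # ...wait for the 3m certificates to lapse, then start the same profile again with a longer expiry
    out/minikube-linux-arm64 start -p cert-expiration-demo --memory=2048 --cert-expiration=8760h \
      --driver=docker --container-runtime=containerd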

                                                
                                    
TestForceSystemdFlag (41.44s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-906634 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-906634 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (38.740954586s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-906634 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-906634" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-906634
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-906634: (2.275554503s)
--- PASS: TestForceSystemdFlag (41.44s)
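
What the flag test asserts is that --force-systemd flips containerd's cgroup driver to systemd. The same check by hand, assuming a profile named systemd-demo; SystemdCgroup is the standard containerd runc option, not anything minikube-specific:

    minikube start -p systemd-demo --memory=2048 --force-systemd --driver=docker --container-runtime=containerd
    # Expect SystemdCgroup = true in the runc runtime options.
    minikube -p systemd-demo ssh "cat /etc/containerd/config.toml" | grep SystemdCgroup
    minikube delete -p systemd-demo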

                                                
                                    
TestForceSystemdEnv (42.23s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-705767 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-705767 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (39.434836145s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-705767 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-705767" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-705767
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-705767: (2.386341661s)
--- PASS: TestForceSystemdEnv (42.23s)

                                                
                                    
TestDockerEnvContainerd (45.39s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-668559 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-668559 --driver=docker  --container-runtime=containerd: (29.879595149s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-668559"
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-DbZ4voFNSvKM/agent.1164074" SSH_AGENT_PID="1164075" DOCKER_HOST=ssh://docker@127.0.0.1:33933 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-DbZ4voFNSvKM/agent.1164074" SSH_AGENT_PID="1164075" DOCKER_HOST=ssh://docker@127.0.0.1:33933 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-DbZ4voFNSvKM/agent.1164074" SSH_AGENT_PID="1164075" DOCKER_HOST=ssh://docker@127.0.0.1:33933 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.071768983s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-DbZ4voFNSvKM/agent.1164074" SSH_AGENT_PID="1164075" DOCKER_HOST=ssh://docker@127.0.0.1:33933 docker image ls"
helpers_test.go:175: Cleaning up "dockerenv-668559" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-668559
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-668559: (1.986423744s)
--- PASS: TestDockerEnvContainerd (45.39s)
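
The docker-env flow above is the SSH variant: the host docker CLI is pointed at the engine inside the node through an ssh-agent rather than a TCP endpoint. A condensed sketch, assuming a profile named env-demo and a build context of your own:

    minikube start -p env-demo --driver=docker --container-runtime=containerd
    # Emits DOCKER_HOST=ssh://... exports and loads the node key into an ssh-agent.
    eval "$(minikube -p env-demo docker-env --ssh-host --ssh-add)"
    docker version
    # Classic (non-BuildKit) build, exactly as the test drives it.
    DOCKER_BUILDKIT=0 docker build -t local/demo:latest .
    docker image ls | grep local/demo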

                                                
                                    
TestErrorSpam/setup (29.4s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-461834 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-461834 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-461834 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-461834 --driver=docker  --container-runtime=containerd: (29.40062331s)
--- PASS: TestErrorSpam/setup (29.40s)

                                                
                                    
TestErrorSpam/start (0.77s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-461834 --log_dir /tmp/nospam-461834 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-461834 --log_dir /tmp/nospam-461834 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-461834 --log_dir /tmp/nospam-461834 start --dry-run
--- PASS: TestErrorSpam/start (0.77s)

                                                
                                    
TestErrorSpam/status (1.07s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-461834 --log_dir /tmp/nospam-461834 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-461834 --log_dir /tmp/nospam-461834 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-461834 --log_dir /tmp/nospam-461834 status
--- PASS: TestErrorSpam/status (1.07s)

                                                
                                    
TestErrorSpam/pause (1.89s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-461834 --log_dir /tmp/nospam-461834 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-461834 --log_dir /tmp/nospam-461834 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-461834 --log_dir /tmp/nospam-461834 pause
--- PASS: TestErrorSpam/pause (1.89s)

                                                
                                    
TestErrorSpam/unpause (2.15s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-461834 --log_dir /tmp/nospam-461834 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-461834 --log_dir /tmp/nospam-461834 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-461834 --log_dir /tmp/nospam-461834 unpause
--- PASS: TestErrorSpam/unpause (2.15s)

                                                
                                    
TestErrorSpam/stop (1.5s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-461834 --log_dir /tmp/nospam-461834 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-461834 --log_dir /tmp/nospam-461834 stop: (1.296410824s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-461834 --log_dir /tmp/nospam-461834 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-461834 --log_dir /tmp/nospam-461834 stop
--- PASS: TestErrorSpam/stop (1.50s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19423-1139612/.minikube/files/etc/test/nested/copy/1145018/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (48.96s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-arm64 start -p functional-219483 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
functional_test.go:2234: (dbg) Done: out/minikube-linux-arm64 start -p functional-219483 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (48.958748592s)
--- PASS: TestFunctional/serial/StartWithProxy (48.96s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (7.16s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-linux-arm64 start -p functional-219483 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-arm64 start -p functional-219483 --alsologtostderr -v=8: (7.153767779s)
functional_test.go:663: soft start took 7.160531462s for "functional-219483" cluster.
--- PASS: TestFunctional/serial/SoftStart (7.16s)

                                                
                                    
TestFunctional/serial/KubeContext (0.07s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.07s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.09s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-219483 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (4.25s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-219483 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-219483 cache add registry.k8s.io/pause:3.1: (1.594313809s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-219483 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-219483 cache add registry.k8s.io/pause:3.3: (1.417589454s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-219483 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-219483 cache add registry.k8s.io/pause:latest: (1.237177961s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.25s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.27s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-219483 /tmp/TestFunctionalserialCacheCmdcacheadd_local1801434871/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-arm64 -p functional-219483 cache add minikube-local-cache-test:functional-219483
functional_test.go:1094: (dbg) Run:  out/minikube-linux-arm64 -p functional-219483 cache delete minikube-local-cache-test:functional-219483
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-219483
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.27s)
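
The add_local variant shows that cache add also accepts images that exist only in the host's local Docker store. A sketch, assuming a hypothetical profile functional-demo and a locally built tag:

    docker build -t my-local-cache-test:dev .
    minikube -p functional-demo cache add my-local-cache-test:dev
    minikube -p functional-demo cache delete my-local-cache-test:dev
    docker rmi my-local-cache-test:dev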

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-arm64 -p functional-219483 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (2.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-arm64 -p functional-219483 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-arm64 -p functional-219483 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-219483 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (312.557748ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-arm64 -p functional-219483 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-arm64 -p functional-219483 cache reload: (1.149864263s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-arm64 -p functional-219483 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.09s)
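
Taken together, the cache subtests walk one workflow: add an image to minikube's cache, remove it from the node's runtime, then reload the cache to restore it. The same sequence against a hypothetical profile functional-demo:

    minikube -p functional-demo cache add registry.k8s.io/pause:3.1
    minikube cache list
    # Simulate image loss inside the node, then restore it from the cache.
    minikube -p functional-demo ssh sudo crictl rmi registry.k8s.io/pause:3.1
    minikube -p functional-demo cache reload
    minikube -p functional-demo ssh sudo crictl inspecti registry.k8s.io/pause:3.1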

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-arm64 -p functional-219483 kubectl -- --context functional-219483 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-219483 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

                                                
                                    
TestFunctional/serial/ExtraConfig (44.56s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-arm64 start -p functional-219483 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-arm64 start -p functional-219483 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (44.552928949s)
functional_test.go:761: restart took 44.55303279s for "functional-219483" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (44.56s)
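
The restart above threads a component flag through --extra-config, which is the general mechanism for passing kubeadm component options. A sketch against a hypothetical profile functional-demo:

    # --wait=all blocks until the apiserver, kubelet, and system pods report healthy.
    minikube start -p functional-demo \
      --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all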

                                                
                                    
TestFunctional/serial/ComponentHealth (0.1s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-219483 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.66s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-arm64 -p functional-219483 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-arm64 -p functional-219483 logs: (1.658285752s)
--- PASS: TestFunctional/serial/LogsCmd (1.66s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.98s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-arm64 -p functional-219483 logs --file /tmp/TestFunctionalserialLogsFileCmd3310931826/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-arm64 -p functional-219483 logs --file /tmp/TestFunctionalserialLogsFileCmd3310931826/001/logs.txt: (1.981227953s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.98s)

                                                
                                    
TestFunctional/serial/InvalidService (4.66s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-219483 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-219483
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-219483: exit status 115 (563.870984ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:32646 |
	|-----------|-------------|-------------|---------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-219483 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.66s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.48s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-219483 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-219483 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-219483 config get cpus: exit status 14 (81.570908ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-219483 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-219483 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-219483 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-219483 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-219483 config get cpus: exit status 14 (68.495748ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.48s)
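
Exit status 14 is expected here: config get fails while a key is unset. The cycle the test walks, as a sketch against a hypothetical profile functional-demo:

    minikube -p functional-demo config get cpus      # exits 14 while the key is unset
    minikube -p functional-demo config set cpus 2
    minikube -p functional-demo config get cpus      # prints 2
    minikube -p functional-demo config unset cpus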

                                                
                                    
TestFunctional/parallel/DashboardCmd (9.28s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-219483 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-219483 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 1178979: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (9.28s)

                                                
                                    
TestFunctional/parallel/DryRun (0.44s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-arm64 start -p functional-219483 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-219483 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (188.396407ms)

                                                
                                                
-- stdout --
	* [functional-219483] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19423-1139612/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-1139612/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 20:33:37.891709 1178673 out.go:345] Setting OutFile to fd 1 ...
	I0819 20:33:37.891912 1178673 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 20:33:37.891946 1178673 out.go:358] Setting ErrFile to fd 2...
	I0819 20:33:37.891977 1178673 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 20:33:37.892287 1178673 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-1139612/.minikube/bin
	I0819 20:33:37.892736 1178673 out.go:352] Setting JSON to false
	I0819 20:33:37.893867 1178673 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":15365,"bootTime":1724084253,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0819 20:33:37.893978 1178673 start.go:139] virtualization:  
	I0819 20:33:37.895837 1178673 out.go:177] * [functional-219483] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0819 20:33:37.898327 1178673 out.go:177]   - MINIKUBE_LOCATION=19423
	I0819 20:33:37.898393 1178673 notify.go:220] Checking for updates...
	I0819 20:33:37.901383 1178673 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 20:33:37.903168 1178673 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19423-1139612/kubeconfig
	I0819 20:33:37.904965 1178673 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-1139612/.minikube
	I0819 20:33:37.906312 1178673 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0819 20:33:37.907559 1178673 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 20:33:37.909520 1178673 config.go:182] Loaded profile config "functional-219483": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0819 20:33:37.910060 1178673 driver.go:394] Setting default libvirt URI to qemu:///system
	I0819 20:33:37.930427 1178673 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0819 20:33:37.930546 1178673 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 20:33:38.003114 1178673 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-08-19 20:33:37.98564829 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarc
h64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerError
s:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0819 20:33:38.003234 1178673 docker.go:307] overlay module found
	I0819 20:33:38.005343 1178673 out.go:177] * Using the docker driver based on existing profile
	I0819 20:33:38.007224 1178673 start.go:297] selected driver: docker
	I0819 20:33:38.007256 1178673 start.go:901] validating driver "docker" against &{Name:functional-219483 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-219483 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:do
cker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 20:33:38.007382 1178673 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 20:33:38.010229 1178673 out.go:201] 
	W0819 20:33:38.011457 1178673 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0819 20:33:38.013319 1178673 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-arm64 start -p functional-219483 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.44s)
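
Exit status 23 is the resource-validation failure (RSRC_INSUFFICIENT_REQ_MEMORY); --dry-run triggers that validation without touching the running profile. A sketch, hypothetical profile functional-demo:

    # Fails fast: 250MB is below the 1800MB usable minimum reported above.
    minikube start -p functional-demo --dry-run --memory 250MB --driver=docker --container-runtime=containerd
    # With an acceptable configuration the dry run exits 0 and changes nothing.
    minikube start -p functional-demo --dry-run --driver=docker --container-runtime=containerd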

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.22s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-arm64 start -p functional-219483 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-219483 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (217.181847ms)

                                                
                                                
-- stdout --
	* [functional-219483] minikube v1.33.1 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19423-1139612/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-1139612/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 20:33:37.668486 1178563 out.go:345] Setting OutFile to fd 1 ...
	I0819 20:33:37.668645 1178563 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 20:33:37.668670 1178563 out.go:358] Setting ErrFile to fd 2...
	I0819 20:33:37.668687 1178563 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 20:33:37.669732 1178563 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-1139612/.minikube/bin
	I0819 20:33:37.670196 1178563 out.go:352] Setting JSON to false
	I0819 20:33:37.671264 1178563 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":15364,"bootTime":1724084253,"procs":211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0819 20:33:37.671340 1178563 start.go:139] virtualization:  
	I0819 20:33:37.673428 1178563 out.go:177] * [functional-219483] minikube v1.33.1 sur Ubuntu 20.04 (arm64)
	I0819 20:33:37.675375 1178563 out.go:177]   - MINIKUBE_LOCATION=19423
	I0819 20:33:37.675576 1178563 notify.go:220] Checking for updates...
	I0819 20:33:37.678057 1178563 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 20:33:37.679877 1178563 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19423-1139612/kubeconfig
	I0819 20:33:37.681723 1178563 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-1139612/.minikube
	I0819 20:33:37.683439 1178563 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0819 20:33:37.684984 1178563 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 20:33:37.686869 1178563 config.go:182] Loaded profile config "functional-219483": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0819 20:33:37.687510 1178563 driver.go:394] Setting default libvirt URI to qemu:///system
	I0819 20:33:37.718224 1178563 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0819 20:33:37.718363 1178563 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 20:33:37.817033 1178563 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-08-19 20:33:37.80465921 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarc
h64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerError
s:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0819 20:33:37.817198 1178563 docker.go:307] overlay module found
	I0819 20:33:37.819028 1178563 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0819 20:33:37.820295 1178563 start.go:297] selected driver: docker
	I0819 20:33:37.820341 1178563 start.go:901] validating driver "docker" against &{Name:functional-219483 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-219483 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:do
cker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 20:33:37.820473 1178563 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 20:33:37.822093 1178563 out.go:201] 
	W0819 20:33:37.823775 1178563 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0819 20:33:37.825046 1178563 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.22s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.26s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-arm64 -p functional-219483 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-arm64 -p functional-219483 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-arm64 -p functional-219483 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.26s)
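
status takes a Go template via -f and structured output via -o json, which is what the second and third invocations above use. A sketch with the same template fields, hypothetical profile functional-demo:

    minikube -p functional-demo status
    minikube -p functional-demo status -f 'host:{{.Host}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
    minikube -p functional-demo status -o json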

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (10.7s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-219483 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-219483 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-77r4m" [04fed0d6-2ca9-4863-8423-f7d2fb91d3da] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-77r4m" [04fed0d6-2ca9-4863-8423-f7d2fb91d3da] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.003864122s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-arm64 -p functional-219483 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:32528
functional_test.go:1675: http://192.168.49.2:32528: success! body:

Hostname: hello-node-connect-65d86f57f4-77r4m

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:32528
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-
--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.70s)
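
The connectivity check is: deploy an echo server, expose it as a NodePort, resolve the URL through minikube service, and fetch it. A sketch using the same image and a hypothetical context/profile functional-demo:

    kubectl --context functional-demo create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
    kubectl --context functional-demo expose deployment hello-node --type=NodePort --port=8080
    # Resolve the node IP and assigned NodePort, then hit the service.
    minikube -p functional-demo service hello-node --url
    curl "$(minikube -p functional-demo service hello-node --url)"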

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.17s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-arm64 -p functional-219483 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-arm64 -p functional-219483 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.17s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (26.15s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [c431a589-9447-4ce8-a58b-7e22a720ed8e] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003801604s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-219483 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-219483 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-219483 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-219483 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [415d0d84-ceab-4062-aa9a-de1c6c2ce099] Pending
helpers_test.go:344: "sp-pod" [415d0d84-ceab-4062-aa9a-de1c6c2ce099] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [415d0d84-ceab-4062-aa9a-de1c6c2ce099] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.003773824s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-219483 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-219483 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-219483 delete -f testdata/storage-provisioner/pod.yaml: (2.061430171s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-219483 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [0a31f3b3-854f-4529-9303-d75a302bc504] Pending
helpers_test.go:344: "sp-pod" [0a31f3b3-854f-4529-9303-d75a302bc504] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.004838021s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-219483 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (26.15s)
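
The claim test verifies that data written through a PVC survives pod deletion. The flow, assuming manifests of your own in which pvc.yaml defines the claim and pod.yaml runs a pod named sp-pod mounting it at /tmp/mount (the test's manifests live in its testdata and are not reproduced here):

    kubectl --context functional-demo apply -f pvc.yaml
    kubectl --context functional-demo apply -f pod.yaml
    kubectl --context functional-demo exec sp-pod -- touch /tmp/mount/foo
    # Recreate the pod; the file persists because the volume is backed by the claim.
    kubectl --context functional-demo delete -f pod.yaml
    kubectl --context functional-demo apply -f pod.yaml
    kubectl --context functional-demo exec sp-pod -- ls /tmp/mount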

                                                
                                    
TestFunctional/parallel/SSHCmd (0.71s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-arm64 -p functional-219483 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-arm64 -p functional-219483 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.71s)

                                                
                                    
TestFunctional/parallel/CpCmd (2.3s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-219483 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-219483 ssh -n functional-219483 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-219483 cp functional-219483:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd310698869/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-219483 ssh -n functional-219483 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-219483 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-219483 ssh -n functional-219483 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.30s)
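
minikube cp copies in both directions: a host path into the node, and profile:path back out to the host. A sketch, hypothetical profile functional-demo:

    minikube -p functional-demo cp ./cp-test.txt /home/docker/cp-test.txt
    minikube -p functional-demo ssh -n functional-demo "sudo cat /home/docker/cp-test.txt"
    minikube -p functional-demo cp functional-demo:/home/docker/cp-test.txt ./cp-test-roundtrip.txt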

                                                
                                    
TestFunctional/parallel/FileSync (0.34s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/1145018/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-arm64 -p functional-219483 ssh "sudo cat /etc/test/nested/copy/1145018/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.34s)

                                                
                                    
TestFunctional/parallel/CertSync (2.06s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/1145018.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-219483 ssh "sudo cat /etc/ssl/certs/1145018.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/1145018.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-219483 ssh "sudo cat /usr/share/ca-certificates/1145018.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-219483 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/11450182.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-219483 ssh "sudo cat /etc/ssl/certs/11450182.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/11450182.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-219483 ssh "sudo cat /usr/share/ca-certificates/11450182.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-219483 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.06s)

TestFunctional/parallel/NodeLabels (0.11s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-219483 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.11s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.66s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-219483 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-219483 ssh "sudo systemctl is-active docker": exit status 1 (302.029936ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-219483 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-219483 ssh "sudo systemctl is-active crio": exit status 1 (355.449196ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.66s)

TestFunctional/parallel/License (0.24s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.24s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.63s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-219483 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-219483 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-219483 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 1176269: os: process already finished
helpers_test.go:502: unable to terminate pid 1176073: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-219483 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.63s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-219483 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.41s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-219483 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [b5f69588-74c8-41a8-9af4-cbbc7c6a620f] Pending
helpers_test.go:344: "nginx-svc" [b5f69588-74c8-41a8-9af4-cbbc7c6a620f] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [b5f69588-74c8-41a8-9af4-cbbc7c6a620f] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.003955874s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.41s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.1s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-219483 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.10s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.99.253.8 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-219483 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/DeployApp (7.22s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-219483 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-219483 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-9sthh" [8b73d2f5-3140-415f-93e3-6120abbfd78a] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-9sthh" [8b73d2f5-3140-415f-93e3-6120abbfd78a] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.004108538s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.22s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.47s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.47s)

TestFunctional/parallel/ServiceCmd/List (0.65s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-arm64 -p functional-219483 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.65s)

TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1315: Took "355.071431ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1329: Took "64.436352ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.62s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-arm64 -p functional-219483 service list -o json
functional_test.go:1494: Took "612.250266ms" to run "out/minikube-linux-arm64 -p functional-219483 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.62s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.49s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1366: Took "420.98737ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1379: Took "64.319908ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.49s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.54s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-arm64 -p functional-219483 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:32708
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.54s)

TestFunctional/parallel/MountCmd/any-port (6.5s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-219483 /tmp/TestFunctionalparallelMountCmdany-port2186798070/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1724099614926563710" to /tmp/TestFunctionalparallelMountCmdany-port2186798070/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1724099614926563710" to /tmp/TestFunctionalparallelMountCmdany-port2186798070/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1724099614926563710" to /tmp/TestFunctionalparallelMountCmdany-port2186798070/001/test-1724099614926563710
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-219483 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-219483 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (445.570763ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-219483 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-219483 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug 19 20:33 created-by-test
-rw-r--r-- 1 docker docker 24 Aug 19 20:33 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug 19 20:33 test-1724099614926563710
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-219483 ssh cat /mount-9p/test-1724099614926563710
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-219483 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [1596b344-216a-4061-af65-ee5cfe3ec5b4] Pending
helpers_test.go:344: "busybox-mount" [1596b344-216a-4061-af65-ee5cfe3ec5b4] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [1596b344-216a-4061-af65-ee5cfe3ec5b4] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [1596b344-216a-4061-af65-ee5cfe3ec5b4] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 3.003871242s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-219483 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-219483 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-219483 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-219483 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-219483 /tmp/TestFunctionalparallelMountCmdany-port2186798070/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (6.50s)

TestFunctional/parallel/ServiceCmd/Format (0.43s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-arm64 -p functional-219483 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.43s)

TestFunctional/parallel/ServiceCmd/URL (0.49s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-arm64 -p functional-219483 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:32708
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.49s)

TestFunctional/parallel/MountCmd/specific-port (1.31s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-219483 /tmp/TestFunctionalparallelMountCmdspecific-port1033371473/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-219483 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-219483 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-219483 /tmp/TestFunctionalparallelMountCmdspecific-port1033371473/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-219483 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-219483 ssh "sudo umount -f /mount-9p": exit status 1 (350.061306ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-219483 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-219483 /tmp/TestFunctionalparallelMountCmdspecific-port1033371473/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.31s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.65s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-219483 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2906440289/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-219483 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2906440289/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-219483 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2906440289/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-219483 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-219483 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-219483 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-219483 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-219483 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2906440289/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-219483 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2906440289/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-219483 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2906440289/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.65s)

TestFunctional/parallel/Version/short (0.09s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-arm64 -p functional-219483 version --short
--- PASS: TestFunctional/parallel/Version/short (0.09s)

TestFunctional/parallel/Version/components (1.36s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-arm64 -p functional-219483 version -o=json --components
functional_test.go:2270: (dbg) Done: out/minikube-linux-arm64 -p functional-219483 version -o=json --components: (1.363492173s)
--- PASS: TestFunctional/parallel/Version/components (1.36s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-219483 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-219483 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.0
registry.k8s.io/kube-proxy:v1.31.0
registry.k8s.io/kube-controller-manager:v1.31.0
registry.k8s.io/kube-apiserver:v1.31.0
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-219483
docker.io/kindest/kindnetd:v20240813-c6f155d6
docker.io/kindest/kindnetd:v20240730-75a5af0c
docker.io/kicbase/echo-server:functional-219483
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-219483 image ls --format short --alsologtostderr:
I0819 20:33:52.497188 1181471 out.go:345] Setting OutFile to fd 1 ...
I0819 20:33:52.497387 1181471 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 20:33:52.497414 1181471 out.go:358] Setting ErrFile to fd 2...
I0819 20:33:52.497432 1181471 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 20:33:52.497696 1181471 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-1139612/.minikube/bin
I0819 20:33:52.498368 1181471 config.go:182] Loaded profile config "functional-219483": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0819 20:33:52.498554 1181471 config.go:182] Loaded profile config "functional-219483": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0819 20:33:52.499080 1181471 cli_runner.go:164] Run: docker container inspect functional-219483 --format={{.State.Status}}
I0819 20:33:52.523421 1181471 ssh_runner.go:195] Run: systemctl --version
I0819 20:33:52.523477 1181471 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-219483
I0819 20:33:52.546767 1181471 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33943 SSHKeyPath:/home/jenkins/minikube-integration/19423-1139612/.minikube/machines/functional-219483/id_rsa Username:docker}
I0819 20:33:52.644654 1181471 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-219483 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-219483 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/kube-proxy                  | v1.31.0            | sha256:71d55d | 26.8MB |
| registry.k8s.io/pause                       | 3.1                | sha256:8057e0 | 262kB  |
| docker.io/kicbase/echo-server               | functional-219483  | sha256:ce2d2c | 2.17MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:ba04bb | 8.03MB |
| registry.k8s.io/coredns/coredns             | v1.11.1            | sha256:2437cf | 16.5MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:1611cd | 1.94MB |
| registry.k8s.io/kube-controller-manager     | v1.31.0            | sha256:fcb068 | 23.9MB |
| registry.k8s.io/pause                       | 3.10               | sha256:afb617 | 268kB  |
| registry.k8s.io/pause                       | latest             | sha256:8cb209 | 71.3kB |
| docker.io/library/nginx                     | alpine             | sha256:70594c | 19.6MB |
| docker.io/library/nginx                     | latest             | sha256:a9dfdb | 67.7MB |
| registry.k8s.io/echoserver-arm              | 1.8                | sha256:72565b | 45.3MB |
| registry.k8s.io/etcd                        | 3.5.15-0           | sha256:27e383 | 66.5MB |
| registry.k8s.io/kube-apiserver              | v1.31.0            | sha256:cd0f0a | 25.7MB |
| registry.k8s.io/kube-scheduler              | v1.31.0            | sha256:fbbbd4 | 18.5MB |
| registry.k8s.io/pause                       | 3.3                | sha256:3d1873 | 249kB  |
| docker.io/kindest/kindnetd                  | v20240730-75a5af0c | sha256:d5e283 | 33.3MB |
| docker.io/kindest/kindnetd                  | v20240813-c6f155d6 | sha256:6a23fa | 33.3MB |
| docker.io/library/minikube-local-cache-test | functional-219483  | sha256:418630 | 991B   |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-219483 image ls --format table --alsologtostderr:
I0819 20:33:52.778808 1181541 out.go:345] Setting OutFile to fd 1 ...
I0819 20:33:52.781482 1181541 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 20:33:52.781535 1181541 out.go:358] Setting ErrFile to fd 2...
I0819 20:33:52.781557 1181541 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 20:33:52.781857 1181541 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-1139612/.minikube/bin
I0819 20:33:52.782556 1181541 config.go:182] Loaded profile config "functional-219483": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0819 20:33:52.782756 1181541 config.go:182] Loaded profile config "functional-219483": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0819 20:33:52.783286 1181541 cli_runner.go:164] Run: docker container inspect functional-219483 --format={{.State.Status}}
I0819 20:33:52.802551 1181541 ssh_runner.go:195] Run: systemctl --version
I0819 20:33:52.802602 1181541 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-219483
I0819 20:33:52.829434 1181541 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33943 SSHKeyPath:/home/jenkins/minikube-integration/19423-1139612/.minikube/machines/functional-219483/id_rsa Username:docker}
I0819 20:33:52.920804 1181541 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-219483 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-219483 image ls --format json --alsologtostderr:
[{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"},{"id":"sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"18306114"},{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"},{"id":"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"16482581"},{"id":"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39
d2f68af7da","repoDigests":["registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"66535646"},{"id":"sha256:cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388","repoDigests":["registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.0"],"size":"25688321"},{"id":"sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"45324675"},{"id":"sha256:d5e283bc63d431d0446af8b48a1618696def3b777347a97b8b3553d2c989c806","repoDigests":["docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3"],"repoTags":["docker.io/kindest/kindnetd:v20240730-75a5af0c"],"size":"33305789"},{"id":"sha256:6a
23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51","repoDigests":["docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"],"repoTags":["docker.io/kindest/kindnetd:v20240813-c6f155d6"],"size":"33309097"},{"id":"sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"74084559"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:70594c812316a9bc20dd5d679982c6322dc7cf0128687ae9f849d0207783e753","repoDigests":["docker.io/library/nginx@sha256:c04c18adc2a407740a397c8407c011fc6c90026a9b65cceddef7ae5484360158"],"repoTags":["docker.io/
library/nginx:alpine"],"size":"19627164"},{"id":"sha256:a9dfdba8b719078c5705fdecd6f8315765cc79e473111aa9451551ddc340b2bc","repoDigests":["docker.io/library/nginx@sha256:447a8665cc1dab95b1ca778e162215839ccbb9189104c79d7ec3a81e14577add"],"repoTags":["docker.io/library/nginx:latest"],"size":"67690150"},{"id":"sha256:418630b54470e8f9127661ad2b9885e8758897ec75f305a67ace5b4d73e92f81","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-219483"],"size":"991"},{"id":"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-219483"],"size":"2173567"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3
cb4c2fd","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.0"],"size":"23947353"},{"id":"sha256:71d55d66fd4eec8986225089a135fadd96bc6624d987096808772ce1e1924d89","repoDigests":["registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.0"],"size":"26752334"},{"id":"sha256:fbbbd428abb4dae52ab3018797d00d5840a739f0cc5697b662791831a60b0adb","repoDigests":["registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.0"],"size":"18505843"},{"id":"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":["registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"267933"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-219483 image ls --format json --alsologtostderr:
I0819 20:33:53.028306 1181623 out.go:345] Setting OutFile to fd 1 ...
I0819 20:33:53.028572 1181623 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 20:33:53.028586 1181623 out.go:358] Setting ErrFile to fd 2...
I0819 20:33:53.028592 1181623 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 20:33:53.028946 1181623 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-1139612/.minikube/bin
I0819 20:33:53.029702 1181623 config.go:182] Loaded profile config "functional-219483": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0819 20:33:53.029864 1181623 config.go:182] Loaded profile config "functional-219483": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0819 20:33:53.030394 1181623 cli_runner.go:164] Run: docker container inspect functional-219483 --format={{.State.Status}}
I0819 20:33:53.048053 1181623 ssh_runner.go:195] Run: systemctl --version
I0819 20:33:53.048113 1181623 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-219483
I0819 20:33:53.076393 1181623 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33943 SSHKeyPath:/home/jenkins/minikube-integration/19423-1139612/.minikube/machines/functional-219483/id_rsa Username:docker}
I0819 20:33:53.175760 1181623 ssh_runner.go:195] Run: sudo crictl images --output json
E0819 20:33:55.580534 1145018 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/addons-069800/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:33:55.587728 1145018 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/addons-069800/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:33:55.599053 1145018 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/addons-069800/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-219483 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-219483 image ls --format yaml --alsologtostderr:
- id: sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-219483
size: "2173567"
- id: sha256:6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51
repoDigests:
- docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166
repoTags:
- docker.io/kindest/kindnetd:v20240813-c6f155d6
size: "33309097"
- id: sha256:70594c812316a9bc20dd5d679982c6322dc7cf0128687ae9f849d0207783e753
repoDigests:
- docker.io/library/nginx@sha256:c04c18adc2a407740a397c8407c011fc6c90026a9b65cceddef7ae5484360158
repoTags:
- docker.io/library/nginx:alpine
size: "19627164"
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "16482581"
- id: sha256:fbbbd428abb4dae52ab3018797d00d5840a739f0cc5697b662791831a60b0adb
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.0
size: "18505843"
- id: sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "45324675"
- id: sha256:71d55d66fd4eec8986225089a135fadd96bc6624d987096808772ce1e1924d89
repoDigests:
- registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe
repoTags:
- registry.k8s.io/kube-proxy:v1.31.0
size: "26752334"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "18306114"
- id: sha256:a9dfdba8b719078c5705fdecd6f8315765cc79e473111aa9451551ddc340b2bc
repoDigests:
- docker.io/library/nginx@sha256:447a8665cc1dab95b1ca778e162215839ccbb9189104c79d7ec3a81e14577add
repoTags:
- docker.io/library/nginx:latest
size: "67690150"
- id: sha256:cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.0
size: "25688321"
- id: sha256:fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.0
size: "23947353"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests:
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "267933"
- id: sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "74084559"
- id: sha256:d5e283bc63d431d0446af8b48a1618696def3b777347a97b8b3553d2c989c806
repoDigests:
- docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3
repoTags:
- docker.io/kindest/kindnetd:v20240730-75a5af0c
size: "33305789"
- id: sha256:418630b54470e8f9127661ad2b9885e8758897ec75f305a67ace5b4d73e92f81
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-219483
size: "991"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests:
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "66535646"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-219483 image ls --format yaml --alsologtostderr:
I0819 20:33:52.502009 1181472 out.go:345] Setting OutFile to fd 1 ...
I0819 20:33:52.502145 1181472 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 20:33:52.502156 1181472 out.go:358] Setting ErrFile to fd 2...
I0819 20:33:52.502162 1181472 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 20:33:52.502422 1181472 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-1139612/.minikube/bin
I0819 20:33:52.503055 1181472 config.go:182] Loaded profile config "functional-219483": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0819 20:33:52.503185 1181472 config.go:182] Loaded profile config "functional-219483": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0819 20:33:52.503679 1181472 cli_runner.go:164] Run: docker container inspect functional-219483 --format={{.State.Status}}
I0819 20:33:52.535426 1181472 ssh_runner.go:195] Run: systemctl --version
I0819 20:33:52.535495 1181472 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-219483
I0819 20:33:52.560751 1181472 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33943 SSHKeyPath:/home/jenkins/minikube-integration/19423-1139612/.minikube/machines/functional-219483/id_rsa Username:docker}
I0819 20:33:52.652692 1181472 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.29s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.14s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p functional-219483 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-219483 ssh pgrep buildkitd: exit status 1 (358.374328ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-arm64 -p functional-219483 image build -t localhost/my-image:functional-219483 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-arm64 -p functional-219483 image build -t localhost/my-image:functional-219483 testdata/build --alsologtostderr: (2.548356622s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-arm64 -p functional-219483 image build -t localhost/my-image:functional-219483 testdata/build --alsologtostderr:
I0819 20:33:53.129359 1181645 out.go:345] Setting OutFile to fd 1 ...
I0819 20:33:53.129928 1181645 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 20:33:53.129943 1181645 out.go:358] Setting ErrFile to fd 2...
I0819 20:33:53.129954 1181645 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 20:33:53.130211 1181645 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-1139612/.minikube/bin
I0819 20:33:53.130890 1181645 config.go:182] Loaded profile config "functional-219483": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0819 20:33:53.131596 1181645 config.go:182] Loaded profile config "functional-219483": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0819 20:33:53.132203 1181645 cli_runner.go:164] Run: docker container inspect functional-219483 --format={{.State.Status}}
I0819 20:33:53.149171 1181645 ssh_runner.go:195] Run: systemctl --version
I0819 20:33:53.149233 1181645 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-219483
I0819 20:33:53.165114 1181645 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33943 SSHKeyPath:/home/jenkins/minikube-integration/19423-1139612/.minikube/machines/functional-219483/id_rsa Username:docker}
I0819 20:33:53.264463 1181645 build_images.go:161] Building image from path: /tmp/build.1707016024.tar
I0819 20:33:53.264532 1181645 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0819 20:33:53.273081 1181645 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1707016024.tar
I0819 20:33:53.276201 1181645 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1707016024.tar: stat -c "%s %y" /var/lib/minikube/build/build.1707016024.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1707016024.tar': No such file or directory
I0819 20:33:53.276249 1181645 ssh_runner.go:362] scp /tmp/build.1707016024.tar --> /var/lib/minikube/build/build.1707016024.tar (3072 bytes)
I0819 20:33:53.300304 1181645 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1707016024
I0819 20:33:53.308904 1181645 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1707016024 -xf /var/lib/minikube/build/build.1707016024.tar
I0819 20:33:53.317989 1181645 containerd.go:394] Building image: /var/lib/minikube/build/build.1707016024
I0819 20:33:53.318069 1181645 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1707016024 --local dockerfile=/var/lib/minikube/build/build.1707016024 --output type=image,name=localhost/my-image:functional-219483
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.7s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.2s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.4s

#6 [2/3] RUN true
#6 DONE 0.7s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:86de8591f792bbe2b56157ff816fb0b4f9f860d1b9ef809555cb1db15f950868
#8 exporting manifest sha256:86de8591f792bbe2b56157ff816fb0b4f9f860d1b9ef809555cb1db15f950868 done
#8 exporting config sha256:2c303478521f308c8b06f5499caca092ce05b610c9d34822e723843a3036bb4a 0.0s done
#8 naming to localhost/my-image:functional-219483 done
#8 DONE 0.1s
I0819 20:33:55.582389 1181645 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1707016024 --local dockerfile=/var/lib/minikube/build/build.1707016024 --output type=image,name=localhost/my-image:functional-219483: (2.264286961s)
I0819 20:33:55.582522 1181645 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1707016024
I0819 20:33:55.592787 1181645 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1707016024.tar
I0819 20:33:55.603156 1181645 build_images.go:217] Built localhost/my-image:functional-219483 from /tmp/build.1707016024.tar
I0819 20:33:55.603190 1181645 build_images.go:133] succeeded building to: functional-219483
I0819 20:33:55.603196 1181645 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-219483 image ls
E0819 20:33:55.620809 1145018 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/addons-069800/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:33:55.662110 1145018 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/addons-069800/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:33:55.743492 1145018 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/addons-069800/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.14s)
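
Note: the log above shows the harness copying a 3 KB build context tarball to the node, unpacking it under /var/lib/minikube/build/build.1707016024, and invoking buildctl against it. The Dockerfile itself is never printed, so the reconstruction below is an assumption inferred from build steps #1-#7 (97-byte Dockerfile, busybox base image, RUN true, ADD content.txt /); the buildctl invocation is copied verbatim from the log.

# Assumed contents of the 97-byte Dockerfile (inferred from steps #5-#7, not shown in the log)
FROM gcr.io/k8s-minikube/busybox:latest
RUN true
ADD content.txt /

# Build command executed on the node (verbatim from the log), after the context
# tarball was scp'd to /var/lib/minikube/build/build.1707016024 and unpacked there
sudo buildctl build --frontend dockerfile.v0 \
  --local context=/var/lib/minikube/build/build.1707016024 \
  --local dockerfile=/var/lib/minikube/build/build.1707016024 \
  --output type=image,name=localhost/my-image:functional-219483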

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-219483
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.66s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p functional-219483 image load --daemon kicbase/echo-server:functional-219483 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-arm64 -p functional-219483 image load --daemon kicbase/echo-server:functional-219483 --alsologtostderr: (1.16930863s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-219483 image ls
2024/08/19 20:33:47 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.44s)
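
Note: the load-into-cluster workflow exercised here can be repeated by hand with the same commands the test ran (functional-219483 is this run's profile; the echo-server tag comes from the Setup step above):

docker pull kicbase/echo-server:1.0
docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-219483
out/minikube-linux-arm64 -p functional-219483 image load --daemon kicbase/echo-server:functional-219483 --alsologtostderr
out/minikube-linux-arm64 -p functional-219483 image ls   # the loaded tag should appear in the list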

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p functional-219483 image load --daemon kicbase/echo-server:functional-219483 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-219483 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.27s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-219483
functional_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p functional-219483 image load --daemon kicbase/echo-server:functional-219483 --alsologtostderr
functional_test.go:245: (dbg) Done: out/minikube-linux-arm64 -p functional-219483 image load --daemon kicbase/echo-server:functional-219483 --alsologtostderr: (1.065302437s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-219483 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.64s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-219483 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.24s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-219483 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.21s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-219483 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.17s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-219483 image save kicbase/echo-server:functional-219483 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.42s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-arm64 -p functional-219483 image rm kicbase/echo-server:functional-219483 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-219483 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.49s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.79s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-arm64 -p functional-219483 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-219483 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.79s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-219483
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-219483 image save --daemon kicbase/echo-server:functional-219483 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-219483
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.44s)
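
Note: taken together, the ImageSaveToFile, ImageRemove, ImageLoadFromFile and ImageSaveDaemon steps form a save/restore round trip. A condensed version, using this run's profile and paths (commands copied from the log above):

out/minikube-linux-arm64 -p functional-219483 image save kicbase/echo-server:functional-219483 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
out/minikube-linux-arm64 -p functional-219483 image rm kicbase/echo-server:functional-219483 --alsologtostderr
out/minikube-linux-arm64 -p functional-219483 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
docker rmi kicbase/echo-server:functional-219483
out/minikube-linux-arm64 -p functional-219483 image save --daemon kicbase/echo-server:functional-219483 --alsologtostderr
docker image inspect kicbase/echo-server:functional-219483   # confirms the image is back in the host daemon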

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-219483
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-219483
E0819 20:33:55.905898 1145018 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/addons-069800/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-219483
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (110.38s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-917932 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E0819 20:34:00.713294 1145018 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/addons-069800/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:34:05.835367 1145018 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/addons-069800/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:34:16.077394 1145018 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/addons-069800/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:34:36.559096 1145018 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/addons-069800/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:35:17.521402 1145018 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/addons-069800/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-917932 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m49.533327997s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-917932 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (110.38s)
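
Note: the HA cluster used by the remaining TestMultiControlPlane steps is created by a single start invocation (copied from the log; about 1m49s on this runner), followed by a status check:

out/minikube-linux-arm64 start -p ha-917932 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker --container-runtime=containerd
out/minikube-linux-arm64 -p ha-917932 status -v=7 --alsologtostderr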

                                                
                                    
TestMultiControlPlane/serial/DeployApp (30.55s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-917932 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-917932 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-917932 -- rollout status deployment/busybox: (27.382706436s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-917932 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-917932 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-917932 -- exec busybox-7dff88458-74w54 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-917932 -- exec busybox-7dff88458-f6wjh -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-917932 -- exec busybox-7dff88458-hkhgw -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-917932 -- exec busybox-7dff88458-74w54 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-917932 -- exec busybox-7dff88458-f6wjh -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-917932 -- exec busybox-7dff88458-hkhgw -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-917932 -- exec busybox-7dff88458-74w54 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-917932 -- exec busybox-7dff88458-f6wjh -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-917932 -- exec busybox-7dff88458-hkhgw -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (30.55s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.65s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-917932 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-917932 -- exec busybox-7dff88458-74w54 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-917932 -- exec busybox-7dff88458-74w54 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-917932 -- exec busybox-7dff88458-f6wjh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-917932 -- exec busybox-7dff88458-f6wjh -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-917932 -- exec busybox-7dff88458-hkhgw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-917932 -- exec busybox-7dff88458-hkhgw -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.65s)
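
Note: the host-reachability check resolves host.minikube.internal inside a pod and then pings the resulting address (192.168.49.1 on this run's docker network). The awk/cut pipeline is the one from the log and extracts the address from nslookup's fifth output line; the pod name below is one of this run's busybox pods:

out/minikube-linux-arm64 kubectl -p ha-917932 -- exec busybox-7dff88458-74w54 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
out/minikube-linux-arm64 kubectl -p ha-917932 -- exec busybox-7dff88458-74w54 -- sh -c "ping -c 1 192.168.49.1"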

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (24.65s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-917932 -v=7 --alsologtostderr
E0819 20:36:39.443092 1145018 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/addons-069800/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-917932 -v=7 --alsologtostderr: (23.627286919s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-917932 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-917932 status -v=7 --alsologtostderr: (1.022163005s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (24.65s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.12s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-917932 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.12s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.75s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.75s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (19.43s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-arm64 -p ha-917932 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-917932 cp testdata/cp-test.txt ha-917932:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-917932 ssh -n ha-917932 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-917932 cp ha-917932:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2292486957/001/cp-test_ha-917932.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-917932 ssh -n ha-917932 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-917932 cp ha-917932:/home/docker/cp-test.txt ha-917932-m02:/home/docker/cp-test_ha-917932_ha-917932-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-917932 ssh -n ha-917932 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-917932 ssh -n ha-917932-m02 "sudo cat /home/docker/cp-test_ha-917932_ha-917932-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-917932 cp ha-917932:/home/docker/cp-test.txt ha-917932-m03:/home/docker/cp-test_ha-917932_ha-917932-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-917932 ssh -n ha-917932 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-917932 ssh -n ha-917932-m03 "sudo cat /home/docker/cp-test_ha-917932_ha-917932-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-917932 cp ha-917932:/home/docker/cp-test.txt ha-917932-m04:/home/docker/cp-test_ha-917932_ha-917932-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-917932 ssh -n ha-917932 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-917932 ssh -n ha-917932-m04 "sudo cat /home/docker/cp-test_ha-917932_ha-917932-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-917932 cp testdata/cp-test.txt ha-917932-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-917932 ssh -n ha-917932-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-917932 cp ha-917932-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2292486957/001/cp-test_ha-917932-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-917932 ssh -n ha-917932-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-917932 cp ha-917932-m02:/home/docker/cp-test.txt ha-917932:/home/docker/cp-test_ha-917932-m02_ha-917932.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-917932 ssh -n ha-917932-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-917932 ssh -n ha-917932 "sudo cat /home/docker/cp-test_ha-917932-m02_ha-917932.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-917932 cp ha-917932-m02:/home/docker/cp-test.txt ha-917932-m03:/home/docker/cp-test_ha-917932-m02_ha-917932-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-917932 ssh -n ha-917932-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-917932 ssh -n ha-917932-m03 "sudo cat /home/docker/cp-test_ha-917932-m02_ha-917932-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-917932 cp ha-917932-m02:/home/docker/cp-test.txt ha-917932-m04:/home/docker/cp-test_ha-917932-m02_ha-917932-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-917932 ssh -n ha-917932-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-917932 ssh -n ha-917932-m04 "sudo cat /home/docker/cp-test_ha-917932-m02_ha-917932-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-917932 cp testdata/cp-test.txt ha-917932-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-917932 ssh -n ha-917932-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-917932 cp ha-917932-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2292486957/001/cp-test_ha-917932-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-917932 ssh -n ha-917932-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-917932 cp ha-917932-m03:/home/docker/cp-test.txt ha-917932:/home/docker/cp-test_ha-917932-m03_ha-917932.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-917932 ssh -n ha-917932-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-917932 ssh -n ha-917932 "sudo cat /home/docker/cp-test_ha-917932-m03_ha-917932.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-917932 cp ha-917932-m03:/home/docker/cp-test.txt ha-917932-m02:/home/docker/cp-test_ha-917932-m03_ha-917932-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-917932 ssh -n ha-917932-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-917932 ssh -n ha-917932-m02 "sudo cat /home/docker/cp-test_ha-917932-m03_ha-917932-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-917932 cp ha-917932-m03:/home/docker/cp-test.txt ha-917932-m04:/home/docker/cp-test_ha-917932-m03_ha-917932-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-917932 ssh -n ha-917932-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-917932 ssh -n ha-917932-m04 "sudo cat /home/docker/cp-test_ha-917932-m03_ha-917932-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-917932 cp testdata/cp-test.txt ha-917932-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-917932 ssh -n ha-917932-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-917932 cp ha-917932-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2292486957/001/cp-test_ha-917932-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-917932 ssh -n ha-917932-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-917932 cp ha-917932-m04:/home/docker/cp-test.txt ha-917932:/home/docker/cp-test_ha-917932-m04_ha-917932.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-917932 ssh -n ha-917932-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-917932 ssh -n ha-917932 "sudo cat /home/docker/cp-test_ha-917932-m04_ha-917932.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-917932 cp ha-917932-m04:/home/docker/cp-test.txt ha-917932-m02:/home/docker/cp-test_ha-917932-m04_ha-917932-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-917932 ssh -n ha-917932-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-917932 ssh -n ha-917932-m02 "sudo cat /home/docker/cp-test_ha-917932-m04_ha-917932-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-917932 cp ha-917932-m04:/home/docker/cp-test.txt ha-917932-m03:/home/docker/cp-test_ha-917932-m04_ha-917932-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-917932 ssh -n ha-917932-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-917932 ssh -n ha-917932-m03 "sudo cat /home/docker/cp-test_ha-917932-m04_ha-917932-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.43s)
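
Note: every CopyFile permutation above follows the same pattern: cp a file to (or between) nodes, then ssh into the target node and cat it back. One instance, using this run's profile and node names (commands copied from the log):

out/minikube-linux-arm64 -p ha-917932 cp testdata/cp-test.txt ha-917932-m02:/home/docker/cp-test.txt
out/minikube-linux-arm64 -p ha-917932 ssh -n ha-917932-m02 "sudo cat /home/docker/cp-test.txt"
out/minikube-linux-arm64 -p ha-917932 cp ha-917932-m02:/home/docker/cp-test.txt ha-917932-m03:/home/docker/cp-test_ha-917932-m02_ha-917932-m03.txt
out/minikube-linux-arm64 -p ha-917932 ssh -n ha-917932-m03 "sudo cat /home/docker/cp-test_ha-917932-m02_ha-917932-m03.txt"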

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (12.98s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-arm64 -p ha-917932 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-arm64 -p ha-917932 node stop m02 -v=7 --alsologtostderr: (12.217301054s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-arm64 -p ha-917932 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-917932 status -v=7 --alsologtostderr: exit status 7 (766.62174ms)

                                                
                                                
-- stdout --
	ha-917932
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-917932-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-917932-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-917932-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 20:37:18.410670 1197917 out.go:345] Setting OutFile to fd 1 ...
	I0819 20:37:18.410866 1197917 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 20:37:18.410879 1197917 out.go:358] Setting ErrFile to fd 2...
	I0819 20:37:18.410885 1197917 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 20:37:18.411145 1197917 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-1139612/.minikube/bin
	I0819 20:37:18.411357 1197917 out.go:352] Setting JSON to false
	I0819 20:37:18.411414 1197917 mustload.go:65] Loading cluster: ha-917932
	I0819 20:37:18.411512 1197917 notify.go:220] Checking for updates...
	I0819 20:37:18.411919 1197917 config.go:182] Loaded profile config "ha-917932": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0819 20:37:18.411940 1197917 status.go:255] checking status of ha-917932 ...
	I0819 20:37:18.412524 1197917 cli_runner.go:164] Run: docker container inspect ha-917932 --format={{.State.Status}}
	I0819 20:37:18.434086 1197917 status.go:330] ha-917932 host status = "Running" (err=<nil>)
	I0819 20:37:18.434116 1197917 host.go:66] Checking if "ha-917932" exists ...
	I0819 20:37:18.434457 1197917 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-917932
	I0819 20:37:18.466652 1197917 host.go:66] Checking if "ha-917932" exists ...
	I0819 20:37:18.467044 1197917 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 20:37:18.467101 1197917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-917932
	I0819 20:37:18.494717 1197917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33948 SSHKeyPath:/home/jenkins/minikube-integration/19423-1139612/.minikube/machines/ha-917932/id_rsa Username:docker}
	I0819 20:37:18.589684 1197917 ssh_runner.go:195] Run: systemctl --version
	I0819 20:37:18.595408 1197917 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 20:37:18.608332 1197917 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 20:37:18.669895 1197917 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:55 OomKillDisable:true NGoroutines:71 SystemTime:2024-08-19 20:37:18.659598279 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0819 20:37:18.670494 1197917 kubeconfig.go:125] found "ha-917932" server: "https://192.168.49.254:8443"
	I0819 20:37:18.670526 1197917 api_server.go:166] Checking apiserver status ...
	I0819 20:37:18.670576 1197917 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:37:18.688068 1197917 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1501/cgroup
	I0819 20:37:18.698581 1197917 api_server.go:182] apiserver freezer: "7:freezer:/docker/d6ddcebaeb3caa1e47a2dec5c66243ab21634633ec94a5d1a482ca39b12a66d6/kubepods/burstable/podc234b7d4ec39e58a3040dae2d50ef27b/8e31e54e3b38165323ff26faa595e94e83c98e367c6d8ab8677fc9d1931bbc94"
	I0819 20:37:18.698657 1197917 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/d6ddcebaeb3caa1e47a2dec5c66243ab21634633ec94a5d1a482ca39b12a66d6/kubepods/burstable/podc234b7d4ec39e58a3040dae2d50ef27b/8e31e54e3b38165323ff26faa595e94e83c98e367c6d8ab8677fc9d1931bbc94/freezer.state
	I0819 20:37:18.707996 1197917 api_server.go:204] freezer state: "THAWED"
	I0819 20:37:18.708035 1197917 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0819 20:37:18.718677 1197917 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0819 20:37:18.718714 1197917 status.go:422] ha-917932 apiserver status = Running (err=<nil>)
	I0819 20:37:18.718726 1197917 status.go:257] ha-917932 status: &{Name:ha-917932 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 20:37:18.718750 1197917 status.go:255] checking status of ha-917932-m02 ...
	I0819 20:37:18.719089 1197917 cli_runner.go:164] Run: docker container inspect ha-917932-m02 --format={{.State.Status}}
	I0819 20:37:18.740074 1197917 status.go:330] ha-917932-m02 host status = "Stopped" (err=<nil>)
	I0819 20:37:18.740112 1197917 status.go:343] host is not running, skipping remaining checks
	I0819 20:37:18.740122 1197917 status.go:257] ha-917932-m02 status: &{Name:ha-917932-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 20:37:18.740144 1197917 status.go:255] checking status of ha-917932-m03 ...
	I0819 20:37:18.740496 1197917 cli_runner.go:164] Run: docker container inspect ha-917932-m03 --format={{.State.Status}}
	I0819 20:37:18.758597 1197917 status.go:330] ha-917932-m03 host status = "Running" (err=<nil>)
	I0819 20:37:18.758625 1197917 host.go:66] Checking if "ha-917932-m03" exists ...
	I0819 20:37:18.759001 1197917 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-917932-m03
	I0819 20:37:18.775839 1197917 host.go:66] Checking if "ha-917932-m03" exists ...
	I0819 20:37:18.776152 1197917 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 20:37:18.776199 1197917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-917932-m03
	I0819 20:37:18.793739 1197917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33958 SSHKeyPath:/home/jenkins/minikube-integration/19423-1139612/.minikube/machines/ha-917932-m03/id_rsa Username:docker}
	I0819 20:37:18.889704 1197917 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 20:37:18.902889 1197917 kubeconfig.go:125] found "ha-917932" server: "https://192.168.49.254:8443"
	I0819 20:37:18.902924 1197917 api_server.go:166] Checking apiserver status ...
	I0819 20:37:18.902970 1197917 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:37:18.914184 1197917 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1429/cgroup
	I0819 20:37:18.924195 1197917 api_server.go:182] apiserver freezer: "7:freezer:/docker/9cacf88749490aba43fb39b93ce2bbc2aaad75e4e884f59fe65af3eaf7b3813f/kubepods/burstable/podbd4490d046a1c5519367bea28abd483c/ab8bbdadd84788123b526822437a4cd213f8116801b7a17f7a6055a7363c9de7"
	I0819 20:37:18.924320 1197917 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/9cacf88749490aba43fb39b93ce2bbc2aaad75e4e884f59fe65af3eaf7b3813f/kubepods/burstable/podbd4490d046a1c5519367bea28abd483c/ab8bbdadd84788123b526822437a4cd213f8116801b7a17f7a6055a7363c9de7/freezer.state
	I0819 20:37:18.935563 1197917 api_server.go:204] freezer state: "THAWED"
	I0819 20:37:18.935638 1197917 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0819 20:37:18.947448 1197917 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0819 20:37:18.947496 1197917 status.go:422] ha-917932-m03 apiserver status = Running (err=<nil>)
	I0819 20:37:18.947519 1197917 status.go:257] ha-917932-m03 status: &{Name:ha-917932-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 20:37:18.947547 1197917 status.go:255] checking status of ha-917932-m04 ...
	I0819 20:37:18.947973 1197917 cli_runner.go:164] Run: docker container inspect ha-917932-m04 --format={{.State.Status}}
	I0819 20:37:18.970528 1197917 status.go:330] ha-917932-m04 host status = "Running" (err=<nil>)
	I0819 20:37:18.970560 1197917 host.go:66] Checking if "ha-917932-m04" exists ...
	I0819 20:37:18.970908 1197917 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-917932-m04
	I0819 20:37:18.996891 1197917 host.go:66] Checking if "ha-917932-m04" exists ...
	I0819 20:37:18.997232 1197917 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 20:37:18.997324 1197917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-917932-m04
	I0819 20:37:19.017465 1197917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33963 SSHKeyPath:/home/jenkins/minikube-integration/19423-1139612/.minikube/machines/ha-917932-m04/id_rsa Username:docker}
	I0819 20:37:19.112115 1197917 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 20:37:19.124563 1197917 status.go:257] ha-917932-m04 status: &{Name:ha-917932-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.98s)
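
Note: with one control-plane node stopped, the status command returns a non-zero exit (exit status 7 in the output above) while still printing per-node state, which is how the test detects the degraded cluster. The two commands involved, using this run's profile:

out/minikube-linux-arm64 -p ha-917932 node stop m02 -v=7 --alsologtostderr
out/minikube-linux-arm64 -p ha-917932 status -v=7 --alsologtostderr   # exits 7 while ha-917932-m02 is stopped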

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.55s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.55s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (19.21s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-arm64 -p ha-917932 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Done: out/minikube-linux-arm64 -p ha-917932 node start m02 -v=7 --alsologtostderr: (18.003588306s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p ha-917932 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-linux-arm64 -p ha-917932 status -v=7 --alsologtostderr: (1.083170333s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (19.21s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.82s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.82s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (143.57s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-917932 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-917932 -v=7 --alsologtostderr
E0819 20:38:06.930529 1145018 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/functional-219483/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:38:06.937065 1145018 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/functional-219483/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:38:06.948578 1145018 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/functional-219483/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:38:06.970099 1145018 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/functional-219483/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:38:07.011614 1145018 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/functional-219483/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:38:07.093206 1145018 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/functional-219483/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:38:07.254796 1145018 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/functional-219483/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:38:07.576571 1145018 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/functional-219483/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:38:08.218640 1145018 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/functional-219483/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:38:09.500354 1145018 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/functional-219483/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:38:12.062780 1145018 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/functional-219483/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-linux-arm64 stop -p ha-917932 -v=7 --alsologtostderr: (37.367346393s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-arm64 start -p ha-917932 --wait=true -v=7 --alsologtostderr
E0819 20:38:17.184167 1145018 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/functional-219483/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:38:27.426313 1145018 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/functional-219483/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:38:47.907786 1145018 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/functional-219483/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:38:55.579341 1145018 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/addons-069800/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:39:23.284727 1145018 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/addons-069800/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:39:28.869180 1145018 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/functional-219483/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-arm64 start -p ha-917932 --wait=true -v=7 --alsologtostderr: (1m46.057392666s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-917932
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (143.57s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (10.68s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-arm64 -p ha-917932 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-arm64 -p ha-917932 node delete m03 -v=7 --alsologtostderr: (9.72912544s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-arm64 -p ha-917932 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.68s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.55s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.55s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (36.16s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-arm64 -p ha-917932 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-linux-arm64 -p ha-917932 stop -v=7 --alsologtostderr: (36.040197369s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-arm64 -p ha-917932 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-917932 status -v=7 --alsologtostderr: exit status 7 (124.538468ms)

                                                
                                                
-- stdout --
	ha-917932
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-917932-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-917932-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 20:40:50.610017 1212187 out.go:345] Setting OutFile to fd 1 ...
	I0819 20:40:50.610153 1212187 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 20:40:50.610163 1212187 out.go:358] Setting ErrFile to fd 2...
	I0819 20:40:50.610168 1212187 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 20:40:50.610410 1212187 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-1139612/.minikube/bin
	I0819 20:40:50.610608 1212187 out.go:352] Setting JSON to false
	I0819 20:40:50.610656 1212187 mustload.go:65] Loading cluster: ha-917932
	I0819 20:40:50.610761 1212187 notify.go:220] Checking for updates...
	I0819 20:40:50.611120 1212187 config.go:182] Loaded profile config "ha-917932": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0819 20:40:50.611133 1212187 status.go:255] checking status of ha-917932 ...
	I0819 20:40:50.611661 1212187 cli_runner.go:164] Run: docker container inspect ha-917932 --format={{.State.Status}}
	I0819 20:40:50.639381 1212187 status.go:330] ha-917932 host status = "Stopped" (err=<nil>)
	I0819 20:40:50.639407 1212187 status.go:343] host is not running, skipping remaining checks
	I0819 20:40:50.639415 1212187 status.go:257] ha-917932 status: &{Name:ha-917932 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 20:40:50.639448 1212187 status.go:255] checking status of ha-917932-m02 ...
	I0819 20:40:50.639791 1212187 cli_runner.go:164] Run: docker container inspect ha-917932-m02 --format={{.State.Status}}
	I0819 20:40:50.666219 1212187 status.go:330] ha-917932-m02 host status = "Stopped" (err=<nil>)
	I0819 20:40:50.666241 1212187 status.go:343] host is not running, skipping remaining checks
	I0819 20:40:50.666248 1212187 status.go:257] ha-917932-m02 status: &{Name:ha-917932-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 20:40:50.666267 1212187 status.go:255] checking status of ha-917932-m04 ...
	I0819 20:40:50.666586 1212187 cli_runner.go:164] Run: docker container inspect ha-917932-m04 --format={{.State.Status}}
	I0819 20:40:50.684851 1212187 status.go:330] ha-917932-m04 host status = "Stopped" (err=<nil>)
	I0819 20:40:50.684874 1212187 status.go:343] host is not running, skipping remaining checks
	I0819 20:40:50.684894 1212187 status.go:257] ha-917932-m04 status: &{Name:ha-917932-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.16s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (79.94s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-arm64 start -p ha-917932 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E0819 20:40:50.791499 1145018 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/functional-219483/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:560: (dbg) Done: out/minikube-linux-arm64 start -p ha-917932 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m18.963600471s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-arm64 -p ha-917932 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (79.94s)
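
Note: the StopCluster/RestartCluster pair boils down to a full stop followed by a start that reuses the existing ha-917932 profile (timings from this run: about 36s to stop, about 1m19s to restart). Commands copied from the log:

out/minikube-linux-arm64 -p ha-917932 stop -v=7 --alsologtostderr
out/minikube-linux-arm64 start -p ha-917932 --wait=true -v=7 --alsologtostderr --driver=docker --container-runtime=containerd
kubectl get nodes   # the test then checks that each remaining node reports Ready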

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.56s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.56s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (41.68s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-917932 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-arm64 node add -p ha-917932 --control-plane -v=7 --alsologtostderr: (40.672879141s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-arm64 -p ha-917932 status -v=7 --alsologtostderr
ha_test.go:611: (dbg) Done: out/minikube-linux-arm64 -p ha-917932 status -v=7 --alsologtostderr: (1.011823131s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (41.68s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.77s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.77s)

                                                
                                    
TestJSONOutput/start/Command (60.04s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-287716 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd
E0819 20:43:06.930808 1145018 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/functional-219483/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:43:34.632813 1145018 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/functional-219483/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:43:55.579895 1145018 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/addons-069800/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-287716 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd: (1m0.036188358s)
--- PASS: TestJSONOutput/start/Command (60.04s)
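
Note: the TestJSONOutput group drives ordinary start/pause/unpause commands with --output=json, so every step is emitted as a machine-readable event rather than plain text. The invocations exercised on this run (profile json-output-287716), copied from the log:

out/minikube-linux-arm64 start -p json-output-287716 --output=json --user=testUser --memory=2200 --wait=true --driver=docker --container-runtime=containerd
out/minikube-linux-arm64 pause -p json-output-287716 --output=json --user=testUser
out/minikube-linux-arm64 unpause -p json-output-287716 --output=json --user=testUser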

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.73s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-287716 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.73s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.65s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-287716 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.65s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.79s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-287716 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-287716 --output=json --user=testUser: (5.791038011s)
--- PASS: TestJSONOutput/stop/Command (5.79s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.23s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-370095 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-370095 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (85.6581ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"82270bfe-36ba-4562-bcc4-b5866738254d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-370095] minikube v1.33.1 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"8dcfe933-008d-4e32-af8d-95a4e54ba25d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19423"}}
	{"specversion":"1.0","id":"25c52752-009e-4ea2-9589-550f3677aa3c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"a0d90a91-e075-44e3-9efc-90d98b67a970","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19423-1139612/kubeconfig"}}
	{"specversion":"1.0","id":"8cb91ccd-64bb-414e-b246-b6ff06e1b398","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-1139612/.minikube"}}
	{"specversion":"1.0","id":"b40bed56-933d-40fd-a963-95494b136d7f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"5de0734a-914d-48e7-a5e4-31b538cc1955","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"b71b46c0-bc67-4c59-a195-3a8fac5f5564","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-370095" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-370095
--- PASS: TestErrorJSONOutput (0.23s)
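Note: the events above are CloudEvents-style JSON objects, one per line, with the failure reported as a type "io.k8s.sigs.minikube.error" event (exit code 56, DRV_UNSUPPORTED_OS). A small sketch of pulling just the error messages out of such a stream; the jq filter is an illustration (it assumes jq is installed) and the profile name is a placeholder:

  # provoke a start failure and keep only the error events' messages
  out/minikube-linux-arm64 start -p json-demo --output=json --driver=fail \
    | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'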

                                                
                                    
TestKicCustomNetwork/create_custom_network (40.83s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-790044 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-790044 --network=: (38.716455455s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-790044" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-790044
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-790044: (2.09036155s)
--- PASS: TestKicCustomNetwork/create_custom_network (40.83s)
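Note: this test starts a profile on a Docker network supplied via --network and then checks that the network name shows up in docker network ls. A rough sketch under the assumption (which is what the test exercises) that minikube's docker driver creates the named network when it does not already exist; names are placeholders:

  # start a profile on a user-named Docker network
  out/minikube-linux-arm64 start -p net-demo --network=my-net
  # the network should now be listed by name
  docker network ls --format "{{.Name}}" | grep my-net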

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (34.41s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-042455 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-042455 --network=bridge: (32.351157827s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-042455" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-042455
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-042455: (2.034100201s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (34.41s)

                                                
                                    
TestKicExistingNetwork (35.65s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-341533 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-341533 --network=existing-network: (33.569612333s)
helpers_test.go:175: Cleaning up "existing-network-341533" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-341533
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-341533: (1.922417482s)
--- PASS: TestKicExistingNetwork (35.65s)

                                                
                                    
TestKicCustomSubnet (33.7s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-365609 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-365609 --subnet=192.168.60.0/24: (31.567109162s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-365609 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-365609" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-365609
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-365609: (2.109434661s)
--- PASS: TestKicCustomSubnet (33.70s)
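Note: here the subnet requested at start time is read back from the Docker network that minikube creates for the profile (the network is named after the profile). Sketch with a placeholder profile name:

  # pin the profile's Docker network to a specific subnet
  out/minikube-linux-arm64 start -p subnet-demo --subnet=192.168.60.0/24
  # read the subnet back out of the network's IPAM config
  docker network inspect subnet-demo --format "{{(index .IPAM.Config 0).Subnet}}"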

                                                
                                    
TestKicStaticIP (33.41s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-157297 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-157297 --static-ip=192.168.200.200: (31.158237527s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-157297 ip
helpers_test.go:175: Cleaning up "static-ip-157297" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-157297
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-157297: (2.105615091s)
--- PASS: TestKicStaticIP (33.41s)
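Note: the static-IP variant asks for a fixed node address and then checks it with the ip subcommand. Sketch with a placeholder profile name:

  # request a fixed IP for the node container
  out/minikube-linux-arm64 start -p staticip-demo --static-ip=192.168.200.200
  # should print the address requested above
  out/minikube-linux-arm64 -p staticip-demo ip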

                                                
                                    
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (71.33s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-748267 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-748267 --driver=docker  --container-runtime=containerd: (32.132801273s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-751040 --driver=docker  --container-runtime=containerd
E0819 20:48:06.930730 1145018 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/functional-219483/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-751040 --driver=docker  --container-runtime=containerd: (33.614796653s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-748267
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-751040
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-751040" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-751040
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-751040: (2.025925472s)
helpers_test.go:175: Cleaning up "first-748267" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-748267
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-748267: (2.246425947s)
--- PASS: TestMinikubeProfile (71.33s)
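Note: this test creates two profiles and flips the active profile between them, checking the result via the JSON profile listing. Sketch with placeholder profile names:

  # two independent profiles
  out/minikube-linux-arm64 start -p first --driver=docker --container-runtime=containerd
  out/minikube-linux-arm64 start -p second --driver=docker --container-runtime=containerd
  # select "first" as the active profile, then inspect the profile list
  out/minikube-linux-arm64 profile first
  out/minikube-linux-arm64 profile list -ojson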

                                                
                                    
TestMountStart/serial/StartWithMountFirst (6.48s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-114121 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-114121 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.483512102s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.48s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.3s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-114121 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.30s)
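Note: the mount-start tests run a no-Kubernetes instance with --mount and confirm the host directory is visible at /minikube-host inside the node. Sketch with a placeholder profile name and a trimmed-down set of the flags the test uses:

  # mount-only instance: no Kubernetes, host directory mounted into the node
  out/minikube-linux-arm64 start -p mount-demo --memory=2048 --mount --mount-port 46464 \
    --no-kubernetes --driver=docker --container-runtime=containerd
  # the mount is verified by listing the in-node mount point
  out/minikube-linux-arm64 -p mount-demo ssh -- ls /minikube-host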

                                                
                                    
TestMountStart/serial/StartWithMountSecond (7.31s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-127462 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-127462 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (6.312147206s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.31s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-127462 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.64s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-114121 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-114121 --alsologtostderr -v=5: (1.637048875s)
--- PASS: TestMountStart/serial/DeleteFirst (1.64s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.25s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-127462 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.25s)

                                                
                                    
TestMountStart/serial/Stop (1.2s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-127462
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-127462: (1.204626671s)
--- PASS: TestMountStart/serial/Stop (1.20s)

                                                
                                    
TestMountStart/serial/RestartStopped (7.87s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-127462
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-127462: (6.871860889s)
--- PASS: TestMountStart/serial/RestartStopped (7.87s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-127462 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (65.88s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-635562 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0819 20:48:55.580000 1145018 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/addons-069800/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-635562 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m5.344919211s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-635562 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (65.88s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (15.04s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-635562 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-635562 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-635562 -- rollout status deployment/busybox: (13.180138141s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-635562 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-635562 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-635562 -- exec busybox-7dff88458-28l5q -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-635562 -- exec busybox-7dff88458-mxnlx -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-635562 -- exec busybox-7dff88458-28l5q -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-635562 -- exec busybox-7dff88458-mxnlx -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-635562 -- exec busybox-7dff88458-28l5q -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-635562 -- exec busybox-7dff88458-mxnlx -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (15.04s)
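Note: the deployment step applies a two-replica busybox manifest, waits for the rollout, and then runs nslookup from each pod so that cluster DNS is exercised from both nodes. Sketch using a placeholder kubectl context name and the test's own manifest path:

  # deploy the test workload and wait for it to roll out
  kubectl --context multinode-demo apply -f testdata/multinodes/multinode-pod-dns-test.yaml
  kubectl --context multinode-demo rollout status deployment/busybox
  # exercise in-cluster DNS from every busybox pod
  for pod in $(kubectl --context multinode-demo get pods -o jsonpath='{.items[*].metadata.name}'); do
    kubectl --context multinode-demo exec "$pod" -- nslookup kubernetes.default.svc.cluster.local
  done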

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (1.05s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-635562 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-635562 -- exec busybox-7dff88458-28l5q -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-635562 -- exec busybox-7dff88458-28l5q -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-635562 -- exec busybox-7dff88458-mxnlx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-635562 -- exec busybox-7dff88458-mxnlx -- sh -c "ping -c 1 192.168.58.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.05s)
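Note: the awk 'NR==5' | cut -d' ' -f3 pipeline picks the resolved address for host.minikube.internal out of busybox's nslookup output; the test then pings that address from inside the pod. Sketch with placeholder context and pod names:

  # resolve the host address from inside a pod, then ping it
  HOST_IP=$(kubectl --context multinode-demo exec busybox-pod -- \
    sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
  kubectl --context multinode-demo exec busybox-pod -- sh -c "ping -c 1 $HOST_IP"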

                                                
                                    
TestMultiNode/serial/AddNode (16.3s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-635562 -v 3 --alsologtostderr
E0819 20:50:18.646653 1145018 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/addons-069800/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-635562 -v 3 --alsologtostderr: (15.6056185s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-635562 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (16.30s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-635562 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.32s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.32s)

                                                
                                    
TestMultiNode/serial/CopyFile (9.99s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-635562 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-635562 cp testdata/cp-test.txt multinode-635562:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-635562 ssh -n multinode-635562 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-635562 cp multinode-635562:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile405032728/001/cp-test_multinode-635562.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-635562 ssh -n multinode-635562 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-635562 cp multinode-635562:/home/docker/cp-test.txt multinode-635562-m02:/home/docker/cp-test_multinode-635562_multinode-635562-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-635562 ssh -n multinode-635562 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-635562 ssh -n multinode-635562-m02 "sudo cat /home/docker/cp-test_multinode-635562_multinode-635562-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-635562 cp multinode-635562:/home/docker/cp-test.txt multinode-635562-m03:/home/docker/cp-test_multinode-635562_multinode-635562-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-635562 ssh -n multinode-635562 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-635562 ssh -n multinode-635562-m03 "sudo cat /home/docker/cp-test_multinode-635562_multinode-635562-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-635562 cp testdata/cp-test.txt multinode-635562-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-635562 ssh -n multinode-635562-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-635562 cp multinode-635562-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile405032728/001/cp-test_multinode-635562-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-635562 ssh -n multinode-635562-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-635562 cp multinode-635562-m02:/home/docker/cp-test.txt multinode-635562:/home/docker/cp-test_multinode-635562-m02_multinode-635562.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-635562 ssh -n multinode-635562-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-635562 ssh -n multinode-635562 "sudo cat /home/docker/cp-test_multinode-635562-m02_multinode-635562.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-635562 cp multinode-635562-m02:/home/docker/cp-test.txt multinode-635562-m03:/home/docker/cp-test_multinode-635562-m02_multinode-635562-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-635562 ssh -n multinode-635562-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-635562 ssh -n multinode-635562-m03 "sudo cat /home/docker/cp-test_multinode-635562-m02_multinode-635562-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-635562 cp testdata/cp-test.txt multinode-635562-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-635562 ssh -n multinode-635562-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-635562 cp multinode-635562-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile405032728/001/cp-test_multinode-635562-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-635562 ssh -n multinode-635562-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-635562 cp multinode-635562-m03:/home/docker/cp-test.txt multinode-635562:/home/docker/cp-test_multinode-635562-m03_multinode-635562.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-635562 ssh -n multinode-635562-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-635562 ssh -n multinode-635562 "sudo cat /home/docker/cp-test_multinode-635562-m03_multinode-635562.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-635562 cp multinode-635562-m03:/home/docker/cp-test.txt multinode-635562-m02:/home/docker/cp-test_multinode-635562-m03_multinode-635562-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-635562 ssh -n multinode-635562-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-635562 ssh -n multinode-635562-m02 "sudo cat /home/docker/cp-test_multinode-635562-m03_multinode-635562-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.99s)
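Note: the copy test drives minikube cp in every direction (host to node, node to host, and node to node) and verifies each copy by reading the file back over ssh. A minimal sketch of one direction, with placeholder profile and node names:

  # copy a file from the host into a specific node, then read it back
  out/minikube-linux-arm64 -p multinode-demo cp testdata/cp-test.txt multinode-demo-m02:/home/docker/cp-test.txt
  out/minikube-linux-arm64 -p multinode-demo ssh -n multinode-demo-m02 "sudo cat /home/docker/cp-test.txt"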

                                                
                                    
TestMultiNode/serial/StopNode (2.25s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-635562 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-635562 node stop m03: (1.209248813s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-635562 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-635562 status: exit status 7 (510.171612ms)

                                                
                                                
-- stdout --
	multinode-635562
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-635562-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-635562-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-635562 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-635562 status --alsologtostderr: exit status 7 (525.685238ms)

                                                
                                                
-- stdout --
	multinode-635562
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-635562-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-635562-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 20:50:41.331294 1265776 out.go:345] Setting OutFile to fd 1 ...
	I0819 20:50:41.331491 1265776 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 20:50:41.331518 1265776 out.go:358] Setting ErrFile to fd 2...
	I0819 20:50:41.331537 1265776 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 20:50:41.331855 1265776 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-1139612/.minikube/bin
	I0819 20:50:41.332086 1265776 out.go:352] Setting JSON to false
	I0819 20:50:41.332150 1265776 mustload.go:65] Loading cluster: multinode-635562
	I0819 20:50:41.332275 1265776 notify.go:220] Checking for updates...
	I0819 20:50:41.332697 1265776 config.go:182] Loaded profile config "multinode-635562": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0819 20:50:41.332734 1265776 status.go:255] checking status of multinode-635562 ...
	I0819 20:50:41.333292 1265776 cli_runner.go:164] Run: docker container inspect multinode-635562 --format={{.State.Status}}
	I0819 20:50:41.353478 1265776 status.go:330] multinode-635562 host status = "Running" (err=<nil>)
	I0819 20:50:41.353508 1265776 host.go:66] Checking if "multinode-635562" exists ...
	I0819 20:50:41.353833 1265776 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-635562
	I0819 20:50:41.379285 1265776 host.go:66] Checking if "multinode-635562" exists ...
	I0819 20:50:41.379684 1265776 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 20:50:41.379757 1265776 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-635562
	I0819 20:50:41.401814 1265776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34068 SSHKeyPath:/home/jenkins/minikube-integration/19423-1139612/.minikube/machines/multinode-635562/id_rsa Username:docker}
	I0819 20:50:41.504943 1265776 ssh_runner.go:195] Run: systemctl --version
	I0819 20:50:41.511290 1265776 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 20:50:41.528752 1265776 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 20:50:41.585950 1265776 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:61 SystemTime:2024-08-19 20:50:41.575111601 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0819 20:50:41.586546 1265776 kubeconfig.go:125] found "multinode-635562" server: "https://192.168.58.2:8443"
	I0819 20:50:41.586583 1265776 api_server.go:166] Checking apiserver status ...
	I0819 20:50:41.586625 1265776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:50:41.598526 1265776 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1437/cgroup
	I0819 20:50:41.608754 1265776 api_server.go:182] apiserver freezer: "7:freezer:/docker/0cdeee09fbc1cc65ae18f933378358cd3b372083e6b5d1db99db057c6be60638/kubepods/burstable/podac40ef775d30eebf8363a7427bb0c4a6/d5763578611f1e174335bee46b7383ca2c0f1b176bb5255221a719f084142aaa"
	I0819 20:50:41.608827 1265776 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/0cdeee09fbc1cc65ae18f933378358cd3b372083e6b5d1db99db057c6be60638/kubepods/burstable/podac40ef775d30eebf8363a7427bb0c4a6/d5763578611f1e174335bee46b7383ca2c0f1b176bb5255221a719f084142aaa/freezer.state
	I0819 20:50:41.618233 1265776 api_server.go:204] freezer state: "THAWED"
	I0819 20:50:41.618263 1265776 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0819 20:50:41.626285 1265776 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0819 20:50:41.626317 1265776 status.go:422] multinode-635562 apiserver status = Running (err=<nil>)
	I0819 20:50:41.626329 1265776 status.go:257] multinode-635562 status: &{Name:multinode-635562 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 20:50:41.626347 1265776 status.go:255] checking status of multinode-635562-m02 ...
	I0819 20:50:41.626703 1265776 cli_runner.go:164] Run: docker container inspect multinode-635562-m02 --format={{.State.Status}}
	I0819 20:50:41.644816 1265776 status.go:330] multinode-635562-m02 host status = "Running" (err=<nil>)
	I0819 20:50:41.644843 1265776 host.go:66] Checking if "multinode-635562-m02" exists ...
	I0819 20:50:41.645160 1265776 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-635562-m02
	I0819 20:50:41.662555 1265776 host.go:66] Checking if "multinode-635562-m02" exists ...
	I0819 20:50:41.662906 1265776 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 20:50:41.662958 1265776 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-635562-m02
	I0819 20:50:41.679645 1265776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34073 SSHKeyPath:/home/jenkins/minikube-integration/19423-1139612/.minikube/machines/multinode-635562-m02/id_rsa Username:docker}
	I0819 20:50:41.769374 1265776 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 20:50:41.781393 1265776 status.go:257] multinode-635562-m02 status: &{Name:multinode-635562-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0819 20:50:41.781431 1265776 status.go:255] checking status of multinode-635562-m03 ...
	I0819 20:50:41.781750 1265776 cli_runner.go:164] Run: docker container inspect multinode-635562-m03 --format={{.State.Status}}
	I0819 20:50:41.799475 1265776 status.go:330] multinode-635562-m03 host status = "Stopped" (err=<nil>)
	I0819 20:50:41.799499 1265776 status.go:343] host is not running, skipping remaining checks
	I0819 20:50:41.799508 1265776 status.go:257] multinode-635562-m03 status: &{Name:multinode-635562-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.25s)
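Note: once a node is stopped, minikube status reports the degraded state and exits non-zero (exit status 7 in the output above), which is what the test asserts on. Sketch with a placeholder profile name:

  # stop one worker node, then observe the non-zero status exit code
  out/minikube-linux-arm64 -p multinode-demo node stop m03
  out/minikube-linux-arm64 -p multinode-demo status; echo "status exit code: $?"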

                                                
                                    
TestMultiNode/serial/StartAfterStop (9.5s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-635562 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-635562 node start m03 -v=7 --alsologtostderr: (8.738567325s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-635562 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.50s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (91.73s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-635562
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-635562
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-635562: (25.003197193s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-635562 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-635562 --wait=true -v=8 --alsologtostderr: (1m6.610702018s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-635562
--- PASS: TestMultiNode/serial/RestartKeepsNodes (91.73s)
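Note: the restart test checks that stopping and restarting the whole profile preserves the node list. Sketch with a placeholder profile name:

  # capture the node list, bounce the cluster, and compare
  out/minikube-linux-arm64 node list -p multinode-demo
  out/minikube-linux-arm64 stop -p multinode-demo
  out/minikube-linux-arm64 start -p multinode-demo --wait=true
  out/minikube-linux-arm64 node list -p multinode-demo   # should match the first listing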

                                                
                                    
TestMultiNode/serial/DeleteNode (5.52s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-635562 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-635562 node delete m03: (4.861329956s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-635562 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.52s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (24.05s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-635562 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-635562 stop: (23.862838309s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-635562 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-635562 status: exit status 7 (89.086134ms)

                                                
                                                
-- stdout --
	multinode-635562
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-635562-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-635562 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-635562 status --alsologtostderr: exit status 7 (93.327164ms)

                                                
                                                
-- stdout --
	multinode-635562
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-635562-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 20:52:52.567391 1274224 out.go:345] Setting OutFile to fd 1 ...
	I0819 20:52:52.567573 1274224 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 20:52:52.567586 1274224 out.go:358] Setting ErrFile to fd 2...
	I0819 20:52:52.567592 1274224 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 20:52:52.567872 1274224 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-1139612/.minikube/bin
	I0819 20:52:52.568087 1274224 out.go:352] Setting JSON to false
	I0819 20:52:52.568144 1274224 mustload.go:65] Loading cluster: multinode-635562
	I0819 20:52:52.568272 1274224 notify.go:220] Checking for updates...
	I0819 20:52:52.568627 1274224 config.go:182] Loaded profile config "multinode-635562": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0819 20:52:52.568647 1274224 status.go:255] checking status of multinode-635562 ...
	I0819 20:52:52.569185 1274224 cli_runner.go:164] Run: docker container inspect multinode-635562 --format={{.State.Status}}
	I0819 20:52:52.587556 1274224 status.go:330] multinode-635562 host status = "Stopped" (err=<nil>)
	I0819 20:52:52.587579 1274224 status.go:343] host is not running, skipping remaining checks
	I0819 20:52:52.587587 1274224 status.go:257] multinode-635562 status: &{Name:multinode-635562 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 20:52:52.587612 1274224 status.go:255] checking status of multinode-635562-m02 ...
	I0819 20:52:52.587956 1274224 cli_runner.go:164] Run: docker container inspect multinode-635562-m02 --format={{.State.Status}}
	I0819 20:52:52.610823 1274224 status.go:330] multinode-635562-m02 host status = "Stopped" (err=<nil>)
	I0819 20:52:52.610845 1274224 status.go:343] host is not running, skipping remaining checks
	I0819 20:52:52.610858 1274224 status.go:257] multinode-635562-m02 status: &{Name:multinode-635562-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.05s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (47.65s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-635562 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0819 20:53:06.929937 1145018 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/functional-219483/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-635562 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (46.9892499s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-635562 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (47.65s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (34.28s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-635562
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-635562-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-635562-m02 --driver=docker  --container-runtime=containerd: exit status 14 (93.108411ms)

                                                
                                                
-- stdout --
	* [multinode-635562-m02] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19423-1139612/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-1139612/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-635562-m02' is duplicated with machine name 'multinode-635562-m02' in profile 'multinode-635562'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-635562-m03 --driver=docker  --container-runtime=containerd
E0819 20:53:55.579919 1145018 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/addons-069800/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-635562-m03 --driver=docker  --container-runtime=containerd: (31.883214293s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-635562
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-635562: exit status 80 (313.51641ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-635562 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-635562-m03 already exists in multinode-635562-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-635562-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-635562-m03: (1.939800308s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (34.28s)

                                                
                                    
TestPreload (113.66s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-412927 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4
E0819 20:54:29.994700 1145018 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/functional-219483/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-412927 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m13.897937289s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-412927 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-412927 image pull gcr.io/k8s-minikube/busybox: (1.190252947s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-412927
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-412927: (12.067637016s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-412927 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-412927 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (23.459807802s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-412927 image list
helpers_test.go:175: Cleaning up "test-preload-412927" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-412927
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-412927: (2.75601002s)
--- PASS: TestPreload (113.66s)

                                                
                                    
x
+
TestScheduledStopUnix (110.22s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-079900 --memory=2048 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-079900 --memory=2048 --driver=docker  --container-runtime=containerd: (34.920317816s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-079900 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-079900 -n scheduled-stop-079900
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-079900 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-079900 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-079900 -n scheduled-stop-079900
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-079900
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-079900 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-079900
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-079900: exit status 7 (67.376746ms)

                                                
                                                
-- stdout --
	scheduled-stop-079900
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-079900 -n scheduled-stop-079900
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-079900 -n scheduled-stop-079900: exit status 7 (69.893478ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-079900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-079900
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-079900: (3.785515912s)
--- PASS: TestScheduledStopUnix (110.22s)
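A minimal sketch (not part of the test suite) of how a caller could wait for a scheduled stop to take effect, assuming the same binary path and the {{.Host}} status format used in the run above; "Stopped" on stdout (with exit status 7, which the test notes "may be ok") is treated as the stopped state.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForStopped polls "minikube status --format={{.Host}}" until it reports
// "Stopped" or the timeout expires. A non-zero exit from minikube still yields
// the captured stdout, so the output is inspected regardless of the error.
func waitForStopped(profile string, timeout time.Duration) bool {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, _ := exec.Command("out/minikube-linux-arm64", "status",
			"--format={{.Host}}", "-p", profile).Output()
		if strings.TrimSpace(string(out)) == "Stopped" {
			return true
		}
		time.Sleep(5 * time.Second)
	}
	return false
}

func main() {
	fmt.Println(waitForStopped("scheduled-stop-079900", 2*time.Minute))
}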

                                                
                                    
x
+
TestInsufficientStorage (10.46s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-649240 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd
E0819 20:58:06.930728 1145018 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/functional-219483/client.crt: no such file or directory" logger="UnhandledError"
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-649240 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (8.027086479s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"a10d2cf3-fcba-45f8-9051-5faab330e1de","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-649240] minikube v1.33.1 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"ea632686-52a0-46da-a42f-05dd71f91349","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19423"}}
	{"specversion":"1.0","id":"441dd19e-c341-4712-bcd3-cc6d4912497d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"9fb8020d-8c05-4e68-af7e-dfab856132cd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19423-1139612/kubeconfig"}}
	{"specversion":"1.0","id":"2b0a18e8-fd90-4dd4-b287-48e52c6189f8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-1139612/.minikube"}}
	{"specversion":"1.0","id":"e597bca0-4f86-403c-95eb-37643f2714fa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"7bebb4fd-ade1-44a8-a91d-d43a4dc79553","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"a9b45672-a37f-4447-b37c-a07ec66e221e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"272e34bc-97da-4a45-8a54-73fb93b6e972","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"ead81162-5223-48e7-9504-5635ea7fd895","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"4562db66-5176-462c-9c6b-d003e2d0e6d7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"d2fc28e0-9269-4220-934a-dbaacd35084e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-649240\" primary control-plane node in \"insufficient-storage-649240\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"c8474b01-efc3-49e5-8ee9-060a068c8cf2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.44-1723740748-19452 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"7cf2968d-8a6d-4bea-b598-1c6d28c8c630","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"fb4e7f65-e7ef-4bf1-b08a-81eb53295bbb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-649240 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-649240 --output=json --layout=cluster: exit status 7 (293.718471ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-649240","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-649240","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0819 20:58:10.687151 1292670 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-649240" does not appear in /home/jenkins/minikube-integration/19423-1139612/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-649240 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-649240 --output=json --layout=cluster: exit status 7 (283.362052ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-649240","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-649240","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0819 20:58:10.972355 1292730 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-649240" does not appear in /home/jenkins/minikube-integration/19423-1139612/kubeconfig
	E0819 20:58:10.982816 1292730 status.go:560] unable to read event log: stat: stat /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/insufficient-storage-649240/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-649240" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-649240
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-649240: (1.85292182s)
--- PASS: TestInsufficientStorage (10.46s)
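A minimal sketch, assuming the JSON event stream shown above: each line emitted by "minikube start --output=json" is a CloudEvent, and the storage failure arrives as an event of type "io.k8s.sigs.minikube.error" whose data carries string fields such as name, exitcode and message. The struct below maps only those fields and is not a complete schema.

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// minikubeEvent is a partial mapping of the CloudEvents lines seen in the log above.
type minikubeEvent struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	// Pipe the --output=json stream into stdin, e.g.
	//   minikube start ... --output=json | go run parse_events.go
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // event lines can be long
	for sc.Scan() {
		var ev minikubeEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip anything that is not a JSON event line
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("error %s (exit code %s): %s\n",
				ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		}
	}
}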

                                                
                                    
x
+
TestRunningBinaryUpgrade (100.51s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.2249397714 start -p running-upgrade-532793 --memory=2200 --vm-driver=docker  --container-runtime=containerd
E0819 21:03:55.580257 1145018 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/addons-069800/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.2249397714 start -p running-upgrade-532793 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (47.904367081s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-532793 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-532793 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (48.594930625s)
helpers_test.go:175: Cleaning up "running-upgrade-532793" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-532793
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-532793: (3.286324992s)
--- PASS: TestRunningBinaryUpgrade (100.51s)

                                                
                                    
x
+
TestKubernetesUpgrade (352.29s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-243774 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-243774 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m1.869110008s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-243774
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-243774: (1.290615621s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-243774 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-243774 status --format={{.Host}}: exit status 7 (102.161958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-243774 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-243774 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m36.452827365s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-243774 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-243774 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-243774 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd: exit status 106 (129.288899ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-243774] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19423-1139612/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-1139612/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-243774
	    minikube start -p kubernetes-upgrade-243774 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-2437742 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0, by running:
	    
	    minikube start -p kubernetes-upgrade-243774 --kubernetes-version=v1.31.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-243774 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-243774 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (9.517574033s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-243774" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-243774
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-243774: (2.725499969s)
--- PASS: TestKubernetesUpgrade (352.29s)
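A minimal sketch (not from the test suite) of detecting the unsupported-downgrade case shown above: "minikube start" with an older --kubernetes-version exits with code 106 (K8S_DOWNGRADE_UNSUPPORTED), which a wrapper can map to the recovery steps suggested in the stderr block.

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// startWithVersion runs "minikube start" for a profile at a given Kubernetes
// version and returns the process exit code (0 on success, -1 if it could not run).
func startWithVersion(profile, version string) int {
	cmd := exec.Command("out/minikube-linux-arm64", "start", "-p", profile,
		"--memory=2200", "--kubernetes-version="+version,
		"--driver=docker", "--container-runtime=containerd")
	err := cmd.Run()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		return exitErr.ExitCode()
	}
	if err != nil {
		return -1
	}
	return 0
}

func main() {
	if startWithVersion("kubernetes-upgrade-243774", "v1.20.0") == 106 {
		fmt.Println("downgrade rejected: delete the profile or keep the existing version, as suggested above")
	}
}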

                                                
                                    
x
+
TestMissingContainerUpgrade (152.08s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.2077549064 start -p missing-upgrade-352723 --memory=2200 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.2077549064 start -p missing-upgrade-352723 --memory=2200 --driver=docker  --container-runtime=containerd: (1m13.752534068s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-352723
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-352723: (10.275349623s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-352723
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-352723 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-352723 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m3.96569833s)
helpers_test.go:175: Cleaning up "missing-upgrade-352723" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-352723
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-352723: (2.433333104s)
--- PASS: TestMissingContainerUpgrade (152.08s)

                                                
                                    
x
+
TestPause/serial/Start (58.71s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-910226 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-910226 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (58.709293489s)
--- PASS: TestPause/serial/Start (58.71s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-610993 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-610993 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd: exit status 14 (106.849181ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-610993] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19423-1139612/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-1139612/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (41.53s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-610993 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-610993 --driver=docker  --container-runtime=containerd: (41.09815488s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-610993 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (41.53s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (17.89s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-610993 --no-kubernetes --driver=docker  --container-runtime=containerd
E0819 20:58:55.579666 1145018 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/addons-069800/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-610993 --no-kubernetes --driver=docker  --container-runtime=containerd: (15.453869907s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-610993 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-610993 status -o json: exit status 2 (339.909323ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-610993","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-610993
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-610993: (2.092164073s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (17.89s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (7.48s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-910226 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-910226 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (7.452107328s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (7.48s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (9.57s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-610993 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-610993 --no-kubernetes --driver=docker  --container-runtime=containerd: (9.569886402s)
--- PASS: TestNoKubernetes/serial/Start (9.57s)

                                                
                                    
x
+
TestPause/serial/Pause (0.89s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-910226 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.89s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.4s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-910226 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-910226 --output=json --layout=cluster: exit status 2 (398.32603ms)

                                                
                                                
-- stdout --
	{"Name":"pause-910226","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-910226","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.40s)
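A minimal sketch, assuming the cluster-layout status JSON shown above; the field names are taken from that output and the struct below is only a partial mapping. After "minikube pause" the top-level status is 418/"Paused", which is what the test expects despite the non-zero exit.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// clusterStatus partially mirrors the JSON printed by
// "minikube status --output=json --layout=cluster".
type clusterStatus struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
	Nodes      []struct {
		Name       string `json:"Name"`
		StatusName string `json:"StatusName"`
	} `json:"Nodes"`
}

func main() {
	// The command exits non-zero for paused/stopped clusters, but stdout still
	// carries the JSON document, so the error is deliberately ignored here.
	out, _ := exec.Command("out/minikube-linux-arm64", "status",
		"-p", "pause-910226", "--output=json", "--layout=cluster").Output()
	var st clusterStatus
	if err := json.Unmarshal(out, &st); err != nil {
		fmt.Println("could not parse status:", err)
		return
	}
	fmt.Printf("%s: %d %s\n", st.Name, st.StatusCode, st.StatusName)
}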

                                                
                                    
x
+
TestPause/serial/Unpause (0.87s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-910226 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.87s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (1.17s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-910226 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-910226 --alsologtostderr -v=5: (1.164987003s)
--- PASS: TestPause/serial/PauseAgain (1.17s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.34s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-610993 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-610993 "sudo systemctl is-active --quiet service kubelet": exit status 1 (339.199547ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.34s)
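A minimal sketch (not part of the suite) of the same check: "systemctl is-active --quiet" exits non-zero when the unit is inactive, which surfaces through "minikube ssh" as the exit status 1 / ssh status 3 seen above, so any non-zero exit is treated here as "kubelet not running".

package main

import (
	"fmt"
	"os/exec"
)

// kubeletRunning returns true only when the kubelet unit reports active
// inside the node, i.e. the ssh'd systemctl command exits 0.
func kubeletRunning(profile string) bool {
	cmd := exec.Command("out/minikube-linux-arm64", "ssh", "-p", profile,
		"sudo systemctl is-active --quiet service kubelet")
	return cmd.Run() == nil
}

func main() {
	fmt.Println(kubeletRunning("NoKubernetes-610993"))
}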

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (2.92s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-arm64 profile list --output=json: (2.358412351s)
--- PASS: TestNoKubernetes/serial/ProfileList (2.92s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (2.85s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-910226 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-910226 --alsologtostderr -v=5: (2.853094235s)
--- PASS: TestPause/serial/DeletePaused (2.85s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.26s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-610993
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-610993: (1.262890184s)
--- PASS: TestNoKubernetes/serial/Stop (1.26s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (2.81s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (2.760932446s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-910226
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-910226: exit status 1 (15.613662ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-910226: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (2.81s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (6.88s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-610993 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-610993 --driver=docker  --container-runtime=containerd: (6.884179292s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.88s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.34s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-610993 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-610993 "sudo systemctl is-active --quiet service kubelet": exit status 1 (343.452408ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.34s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (0.8s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.80s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (107.34s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2776592157 start -p stopped-upgrade-777509 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2776592157 start -p stopped-upgrade-777509 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (47.480932407s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2776592157 -p stopped-upgrade-777509 stop
E0819 21:03:06.933860 1145018 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/functional-219483/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2776592157 -p stopped-upgrade-777509 stop: (20.024333898s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-777509 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-777509 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (39.837866133s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (107.34s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (1.17s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-777509
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-777509: (1.171985208s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (5.08s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-056687 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-056687 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (365.76447ms)

                                                
                                                
-- stdout --
	* [false-056687] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19423-1139612/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-1139612/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 21:05:38.151211 1331854 out.go:345] Setting OutFile to fd 1 ...
	I0819 21:05:38.151366 1331854 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 21:05:38.151393 1331854 out.go:358] Setting ErrFile to fd 2...
	I0819 21:05:38.151411 1331854 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 21:05:38.151690 1331854 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-1139612/.minikube/bin
	I0819 21:05:38.152193 1331854 out.go:352] Setting JSON to false
	I0819 21:05:38.153244 1331854 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":17285,"bootTime":1724084253,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0819 21:05:38.153324 1331854 start.go:139] virtualization:  
	I0819 21:05:38.155941 1331854 out.go:177] * [false-056687] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0819 21:05:38.157895 1331854 out.go:177]   - MINIKUBE_LOCATION=19423
	I0819 21:05:38.158068 1331854 notify.go:220] Checking for updates...
	I0819 21:05:38.160686 1331854 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 21:05:38.162037 1331854 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19423-1139612/kubeconfig
	I0819 21:05:38.163506 1331854 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-1139612/.minikube
	I0819 21:05:38.164691 1331854 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0819 21:05:38.165941 1331854 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 21:05:38.167731 1331854 config.go:182] Loaded profile config "force-systemd-flag-906634": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0819 21:05:38.167906 1331854 driver.go:394] Setting default libvirt URI to qemu:///system
	I0819 21:05:38.217728 1331854 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0819 21:05:38.217856 1331854 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 21:05:38.439648 1331854 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:38 OomKillDisable:true NGoroutines:59 SystemTime:2024-08-19 21:05:38.40296637 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0819 21:05:38.439762 1331854 docker.go:307] overlay module found
	I0819 21:05:38.443983 1331854 out.go:177] * Using the docker driver based on user configuration
	I0819 21:05:38.446229 1331854 start.go:297] selected driver: docker
	I0819 21:05:38.446247 1331854 start.go:901] validating driver "docker" against <nil>
	I0819 21:05:38.446262 1331854 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 21:05:38.449410 1331854 out.go:201] 
	W0819 21:05:38.450560 1331854 out.go:270] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0819 21:05:38.451604 1331854 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-056687 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-056687

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-056687

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-056687

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-056687

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-056687

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-056687

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-056687

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-056687

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-056687

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-056687

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-056687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-056687"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-056687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-056687"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-056687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-056687"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-056687

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-056687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-056687"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-056687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-056687"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-056687" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-056687" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-056687" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-056687" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-056687" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-056687" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-056687" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-056687" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-056687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-056687"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-056687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-056687"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-056687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-056687"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-056687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-056687"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-056687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-056687"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-056687" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-056687" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-056687" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-056687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-056687"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-056687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-056687"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-056687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-056687"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-056687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-056687"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-056687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-056687"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-056687

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-056687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-056687"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-056687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-056687"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-056687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-056687"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-056687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-056687"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-056687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-056687"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-056687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-056687"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-056687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-056687"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-056687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-056687"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-056687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-056687"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-056687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-056687"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-056687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-056687"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-056687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-056687"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-056687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-056687"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-056687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-056687"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-056687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-056687"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-056687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-056687"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-056687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-056687"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-056687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-056687"

                                                
                                                
----------------------- debugLogs end: false-056687 [took: 4.527566844s] --------------------------------
helpers_test.go:175: Cleaning up "false-056687" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-056687
--- PASS: TestNetworkPlugins/group/false (5.08s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (177.33s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-127648 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
E0819 21:08:06.929929 1145018 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/functional-219483/client.crt: no such file or directory" logger="UnhandledError"
E0819 21:08:55.580186 1145018 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/addons-069800/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-127648 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: (2m57.329024576s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (177.33s)
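A rough manual reproduction of this first-start step, for anyone debugging it outside the harness: the sketch below assumes a locally built arm64 binary at out/minikube-linux-arm64, a working Docker daemon, and an illustrative profile name; it mirrors the flags recorded in the Run line above.

  # Start a fresh profile on containerd with the legacy Kubernetes version this test targets
  # (profile name below is illustrative, not the one the CI run used)
  out/minikube-linux-arm64 start -p old-k8s-version-demo \
    --memory=2200 --wait=true --driver=docker \
    --container-runtime=containerd --kubernetes-version=v1.20.0
  # Confirm the control plane came up
  out/minikube-linux-arm64 status -p old-k8s-version-demo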

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (75.53s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-785099 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-785099 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0: (1m15.525620311s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (75.53s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (8.88s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-127648 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [7d9be910-0f27-4ad6-8132-8e477229dab9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [7d9be910-0f27-4ad6-8132-8e477229dab9] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.004544263s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-127648 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.88s)
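The deploy check can be approximated by hand with kubectl against the same context; this is a sketch that assumes the minikube repository's testdata/busybox.yaml is on hand and substitutes kubectl wait for the test's own polling helper.

  # Create the busybox test pod in the default namespace
  kubectl --context old-k8s-version-127648 create -f testdata/busybox.yaml
  # Wait up to 8 minutes for readiness, matching the label the test polls on
  kubectl --context old-k8s-version-127648 wait --for=condition=Ready \
    pod -l integration-test=busybox --timeout=8m
  # The same spot check the test performs once the pod is Running
  kubectl --context old-k8s-version-127648 exec busybox -- /bin/sh -c "ulimit -n"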

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.83s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-127648 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-127648 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.557902905s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-127648 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.83s)
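Enabling an addon with overridden images, as this step does, can be replayed directly; the commands below mirror the Run lines above and assume the same locally built binary and a running profile.

  # Point the metrics-server addon at a stand-in image hosted on a fake registry
  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-127648 \
    --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
    --registries=MetricsServer=fake.domain
  # Inspect the resulting deployment spec
  kubectl --context old-k8s-version-127648 describe deploy/metrics-server -n kube-system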

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (12.85s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-127648 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-127648 --alsologtostderr -v=3: (12.854440401s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.85s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.29s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-127648 -n old-k8s-version-127648
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-127648 -n old-k8s-version-127648: exit status 7 (121.518099ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-127648 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.29s)
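For context on the exit status 7 noted above: minikube status reports a non-zero code for a stopped host, which the test tolerates. A sketch of replaying the stop / enable-after-stop sequence by hand, same binary assumption as earlier:

  # Stop the profile, then confirm status reports Stopped (exit status 7 is expected here)
  out/minikube-linux-arm64 stop -p old-k8s-version-127648
  out/minikube-linux-arm64 status --format='{{.Host}}' -p old-k8s-version-127648 || true
  # Addons can still be toggled while the cluster is down
  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-127648 \
    --images=MetricsScraper=registry.k8s.io/echoserver:1.4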

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (8.43s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-785099 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [6aeab362-bb80-4603-800f-7aaed850254f] Pending
helpers_test.go:344: "busybox" [6aeab362-bb80-4603-800f-7aaed850254f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [6aeab362-bb80-4603-800f-7aaed850254f] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.004845447s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-785099 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.43s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.18s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-785099 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-785099 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.058485575s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-785099 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.18s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (12.13s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-785099 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-785099 --alsologtostderr -v=3: (12.134424715s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.13s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-785099 -n no-preload-785099
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-785099 -n no-preload-785099: exit status 7 (69.281978ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-785099 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (267.4s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-785099 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0
E0819 21:13:06.930839 1145018 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/functional-219483/client.crt: no such file or directory" logger="UnhandledError"
E0819 21:13:55.580120 1145018 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/addons-069800/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-785099 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0: (4m27.040570812s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-785099 -n no-preload-785099
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (267.40s)
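The second start reuses the stopped profile's state rather than creating a new one; a manual sketch, assuming the profile still exists from the earlier FirstStart/Stop steps:

  # Restart the same profile without preloaded images and wait for all components
  out/minikube-linux-arm64 start -p no-preload-785099 --memory=2200 --wait=true \
    --preload=false --driver=docker --container-runtime=containerd \
    --kubernetes-version=v1.31.0
  # Verify the host is back up, as the test does
  out/minikube-linux-arm64 status --format='{{.Host}}' -p no-preload-785099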

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-h2nbw" [00028a79-23e8-47dc-ab20-6177aafec6c2] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003269239s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-h2nbw" [00028a79-23e8-47dc-ab20-6177aafec6c2] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004493586s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-785099 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.29s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-785099 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.29s)
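The image verification step only lists what the runtime has cached and flags anything outside the stock Kubernetes image set; a one-line sketch of the same check:

  # List cached images as JSON; non-minikube entries (busybox, kindnetd) are the ones reported above
  out/minikube-linux-arm64 -p no-preload-785099 image list --format=json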

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (3.23s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-785099 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-785099 -n no-preload-785099
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-785099 -n no-preload-785099: exit status 2 (317.510968ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-785099 -n no-preload-785099
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-785099 -n no-preload-785099: exit status 2 (328.941937ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-785099 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-785099 -n no-preload-785099
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-785099 -n no-preload-785099
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.23s)
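The pause check reads individual status fields: a paused profile reports the API server as Paused and the kubelet as Stopped, each via exit status 2, which the test treats as acceptable. A manual sketch of the same sequence:

  # Pause the whole profile, then read component states (|| true keeps a set -e shell alive)
  out/minikube-linux-arm64 pause -p no-preload-785099
  out/minikube-linux-arm64 status --format='{{.APIServer}}' -p no-preload-785099 || true   # expect: Paused
  out/minikube-linux-arm64 status --format='{{.Kubelet}}' -p no-preload-785099 || true     # expect: Stopped
  # Unpause and read the same fields again
  out/minikube-linux-arm64 unpause -p no-preload-785099
  out/minikube-linux-arm64 status --format='{{.APIServer}}' -p no-preload-785099
  out/minikube-linux-arm64 status --format='{{.Kubelet}}' -p no-preload-785099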

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (65.51s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-249735 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-249735 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0: (1m5.507983565s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (65.51s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.02s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-45zvk" [57907d73-5f1f-4787-a3b5-9a2f7697b904] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.016105053s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.02s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.11s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-45zvk" [57907d73-5f1f-4787-a3b5-9a2f7697b904] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004874669s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-127648 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.11s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-127648 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (2.99s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-127648 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-127648 -n old-k8s-version-127648
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-127648 -n old-k8s-version-127648: exit status 2 (333.874683ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-127648 -n old-k8s-version-127648
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-127648 -n old-k8s-version-127648: exit status 2 (328.599381ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-127648 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-127648 -n old-k8s-version-127648
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-127648 -n old-k8s-version-127648
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.99s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (54.66s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-645275 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-645275 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0: (54.662701942s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (54.66s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (9.49s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-249735 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [2fcf8cad-cde5-4729-9763-9ea16a5179e5] Pending
helpers_test.go:344: "busybox" [2fcf8cad-cde5-4729-9763-9ea16a5179e5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [2fcf8cad-cde5-4729-9763-9ea16a5179e5] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.003084424s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-249735 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.49s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.55s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-249735 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-249735 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.393783783s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-249735 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.55s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (12.56s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-249735 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-249735 --alsologtostderr -v=3: (12.559679509s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.56s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.36s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-249735 -n embed-certs-249735
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-249735 -n embed-certs-249735: exit status 7 (190.370368ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-249735 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.36s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (267.51s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-249735 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0
E0819 21:18:06.930058 1145018 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/functional-219483/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-249735 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0: (4m27.142986342s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-249735 -n embed-certs-249735
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (267.51s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.4s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-645275 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [999bb651-de4e-4e0a-a9f9-d7066aaf0196] Pending
helpers_test.go:344: "busybox" [999bb651-de4e-4e0a-a9f9-d7066aaf0196] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [999bb651-de4e-4e0a-a9f9-d7066aaf0196] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.005972358s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-645275 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.40s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.18s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-645275 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-645275 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.06847434s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-645275 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.18s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.04s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-645275 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-645275 --alsologtostderr -v=3: (12.038863787s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.04s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-645275 -n default-k8s-diff-port-645275
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-645275 -n default-k8s-diff-port-645275: exit status 7 (72.124838ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-645275 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (268.24s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-645275 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0
E0819 21:18:55.580011 1145018 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/addons-069800/client.crt: no such file or directory" logger="UnhandledError"
E0819 21:20:08.430466 1145018 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/old-k8s-version-127648/client.crt: no such file or directory" logger="UnhandledError"
E0819 21:20:08.436796 1145018 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/old-k8s-version-127648/client.crt: no such file or directory" logger="UnhandledError"
E0819 21:20:08.448203 1145018 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/old-k8s-version-127648/client.crt: no such file or directory" logger="UnhandledError"
E0819 21:20:08.469749 1145018 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/old-k8s-version-127648/client.crt: no such file or directory" logger="UnhandledError"
E0819 21:20:08.511251 1145018 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/old-k8s-version-127648/client.crt: no such file or directory" logger="UnhandledError"
E0819 21:20:08.592818 1145018 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/old-k8s-version-127648/client.crt: no such file or directory" logger="UnhandledError"
E0819 21:20:08.754387 1145018 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/old-k8s-version-127648/client.crt: no such file or directory" logger="UnhandledError"
E0819 21:20:09.076160 1145018 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/old-k8s-version-127648/client.crt: no such file or directory" logger="UnhandledError"
E0819 21:20:09.717980 1145018 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/old-k8s-version-127648/client.crt: no such file or directory" logger="UnhandledError"
E0819 21:20:10.999568 1145018 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/old-k8s-version-127648/client.crt: no such file or directory" logger="UnhandledError"
E0819 21:20:13.561016 1145018 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/old-k8s-version-127648/client.crt: no such file or directory" logger="UnhandledError"
E0819 21:20:18.682549 1145018 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/old-k8s-version-127648/client.crt: no such file or directory" logger="UnhandledError"
E0819 21:20:28.924696 1145018 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/old-k8s-version-127648/client.crt: no such file or directory" logger="UnhandledError"
E0819 21:20:49.406176 1145018 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/old-k8s-version-127648/client.crt: no such file or directory" logger="UnhandledError"
E0819 21:21:16.887897 1145018 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/no-preload-785099/client.crt: no such file or directory" logger="UnhandledError"
E0819 21:21:16.894693 1145018 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/no-preload-785099/client.crt: no such file or directory" logger="UnhandledError"
E0819 21:21:16.906274 1145018 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/no-preload-785099/client.crt: no such file or directory" logger="UnhandledError"
E0819 21:21:16.927740 1145018 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/no-preload-785099/client.crt: no such file or directory" logger="UnhandledError"
E0819 21:21:16.969351 1145018 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/no-preload-785099/client.crt: no such file or directory" logger="UnhandledError"
E0819 21:21:17.050816 1145018 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/no-preload-785099/client.crt: no such file or directory" logger="UnhandledError"
E0819 21:21:17.213087 1145018 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/no-preload-785099/client.crt: no such file or directory" logger="UnhandledError"
E0819 21:21:17.534817 1145018 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/no-preload-785099/client.crt: no such file or directory" logger="UnhandledError"
E0819 21:21:18.176739 1145018 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/no-preload-785099/client.crt: no such file or directory" logger="UnhandledError"
E0819 21:21:19.458072 1145018 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/no-preload-785099/client.crt: no such file or directory" logger="UnhandledError"
E0819 21:21:22.019690 1145018 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/no-preload-785099/client.crt: no such file or directory" logger="UnhandledError"
E0819 21:21:27.141345 1145018 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/no-preload-785099/client.crt: no such file or directory" logger="UnhandledError"
E0819 21:21:30.367781 1145018 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/old-k8s-version-127648/client.crt: no such file or directory" logger="UnhandledError"
E0819 21:21:37.383459 1145018 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/no-preload-785099/client.crt: no such file or directory" logger="UnhandledError"
E0819 21:21:57.865402 1145018 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/no-preload-785099/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-645275 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0: (4m27.753493681s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-645275 -n default-k8s-diff-port-645275
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (268.24s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-xxjgf" [b62a9552-c793-43e5-b2ce-b38ae96c05a4] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003824835s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.12s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-xxjgf" [b62a9552-c793-43e5-b2ce-b38ae96c05a4] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004272368s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-249735 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.12s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-249735 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240730-75a5af0c
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (3s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-249735 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-249735 -n embed-certs-249735
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-249735 -n embed-certs-249735: exit status 2 (318.556554ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-249735 -n embed-certs-249735
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-249735 -n embed-certs-249735: exit status 2 (316.177395ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-249735 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-249735 -n embed-certs-249735
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-249735 -n embed-certs-249735
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (38.06s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-480780 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0
E0819 21:22:38.827640 1145018 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/no-preload-785099/client.crt: no such file or directory" logger="UnhandledError"
E0819 21:22:52.289649 1145018 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/old-k8s-version-127648/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-480780 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0: (38.058674736s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (38.06s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-66xn5" [6102430b-b79c-4b17-bcb0-802c5dd9c24e] Running
E0819 21:23:06.930263 1145018 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/functional-219483/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005393568s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.15s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-66xn5" [6102430b-b79c-4b17-bcb0-802c5dd9c24e] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004379958s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-645275 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.15s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-645275 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240730-75a5af0c
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (4.54s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-645275 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p default-k8s-diff-port-645275 --alsologtostderr -v=1: (1.198026778s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-645275 -n default-k8s-diff-port-645275
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-645275 -n default-k8s-diff-port-645275: exit status 2 (496.217099ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-645275 -n default-k8s-diff-port-645275
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-645275 -n default-k8s-diff-port-645275: exit status 2 (480.744185ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-645275 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p default-k8s-diff-port-645275 --alsologtostderr -v=1: (1.344158402s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-645275 -n default-k8s-diff-port-645275
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-645275 -n default-k8s-diff-port-645275
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (4.54s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.18s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-480780 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-480780 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (2.17917425s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.18s)
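The metrics-server step doubles as an example of minikube's per-addon image and registry overrides; a sketch of the same invocation outside the harness (profile, image tag and the placeholder fake.domain registry are all taken from this run):

  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-480780 \
    --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
    --registries=MetricsServer=fake.domain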

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.41s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-480780 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-480780 --alsologtostderr -v=3: (1.407801542s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.41s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-480780 -n newest-cni-480780
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-480780 -n newest-cni-480780: exit status 7 (79.691359ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-480780 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.24s)
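EnableAddonAfterStop toggles an addon while the profile is stopped (hence the Stopped/exit status 7 probe first), so the dashboard is expected to be deployed only on the next start; the equivalent manual sequence, using this run's profile and scraper image override:

  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-480780    # Stopped, exit status 7
  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-480780 --images=MetricsScraper=registry.k8s.io/echoserver:1.4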

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (19.64s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-480780 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-480780 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0: (19.140408925s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-480780 -n newest-cni-480780
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (19.64s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (61.66s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-056687 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
E0819 21:23:38.650622 1145018 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/addons-069800/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-056687 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (1m1.664289187s)
--- PASS: TestNetworkPlugins/group/auto/Start (61.66s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-480780 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240730-75a5af0c
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (3.5s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-480780 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-480780 -n newest-cni-480780
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-480780 -n newest-cni-480780: exit status 2 (412.396216ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-480780 -n newest-cni-480780
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-480780 -n newest-cni-480780: exit status 2 (387.358305ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-480780 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-480780 -n newest-cni-480780
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-480780 -n newest-cni-480780
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.50s)
E0819 21:28:51.037987 1145018 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/default-k8s-diff-port-645275/client.crt: no such file or directory" logger="UnhandledError"
E0819 21:28:55.579913 1145018 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/addons-069800/client.crt: no such file or directory" logger="UnhandledError"
E0819 21:29:23.574513 1145018 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/auto-056687/client.crt: no such file or directory" logger="UnhandledError"
E0819 21:29:23.580999 1145018 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/auto-056687/client.crt: no such file or directory" logger="UnhandledError"
E0819 21:29:23.592364 1145018 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/auto-056687/client.crt: no such file or directory" logger="UnhandledError"
E0819 21:29:23.613723 1145018 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/auto-056687/client.crt: no such file or directory" logger="UnhandledError"
E0819 21:29:23.655096 1145018 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/auto-056687/client.crt: no such file or directory" logger="UnhandledError"
E0819 21:29:23.736579 1145018 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/auto-056687/client.crt: no such file or directory" logger="UnhandledError"
E0819 21:29:23.898172 1145018 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/auto-056687/client.crt: no such file or directory" logger="UnhandledError"
E0819 21:29:24.219602 1145018 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/auto-056687/client.crt: no such file or directory" logger="UnhandledError"
E0819 21:29:24.861461 1145018 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/auto-056687/client.crt: no such file or directory" logger="UnhandledError"
E0819 21:29:26.143519 1145018 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/auto-056687/client.crt: no such file or directory" logger="UnhandledError"
E0819 21:29:28.705146 1145018 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/auto-056687/client.crt: no such file or directory" logger="UnhandledError"
E0819 21:29:32.000000 1145018 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/default-k8s-diff-port-645275/client.crt: no such file or directory" logger="UnhandledError"
E0819 21:29:33.827243 1145018 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/auto-056687/client.crt: no such file or directory" logger="UnhandledError"

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (63.83s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-056687 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
E0819 21:23:55.579968 1145018 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/addons-069800/client.crt: no such file or directory" logger="UnhandledError"
E0819 21:24:00.749528 1145018 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/no-preload-785099/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-056687 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (1m3.828972779s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (63.83s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-056687 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.32s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (11.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-056687 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-8n5vm" [50e1ed9a-c6c4-45c3-8ddd-c3eb3804635b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-8n5vm" [50e1ed9a-c6c4-45c3-8ddd-c3eb3804635b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.003502897s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.35s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-056687 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-056687 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-056687 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.18s)
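Taken together, the NetCatPod, DNS, Localhost and HairPin checks for the auto plugin reduce to one deployment plus three exec probes; a sketch of the same run done by hand (context name and the testdata manifest are this run's, and nslookup/nc are assumed to be available in the dnsutils image the deployment uses):

  kubectl --context auto-056687 replace --force -f testdata/netcat-deployment.yaml
  kubectl --context auto-056687 exec deployment/netcat -- nslookup kubernetes.default
  kubectl --context auto-056687 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
  kubectl --context auto-056687 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"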

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-8q2rg" [2bb770ad-a2a9-40a6-85a4-29fedc7ea7ec] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003835014s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
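The ControllerPod check polls for a Ready kindnet pod through the test helpers; outside the harness the same readiness gate can be approximated with kubectl's own wait (a sketch, not the test's implementation; label selector and namespace are the ones the test watches):

  kubectl --context kindnet-056687 -n kube-system get pods -l app=kindnet
  kubectl --context kindnet-056687 -n kube-system wait --for=condition=Ready pod -l app=kindnet --timeout=10m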

                                                
                                    
TestNetworkPlugins/group/calico/Start (72.68s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-056687 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-056687 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (1m12.683190816s)
--- PASS: TestNetworkPlugins/group/calico/Start (72.68s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-056687 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.35s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (9.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-056687 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-5bcst" [ab021c06-ef2e-4e4c-9dfe-f9b93f97cf31] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-5bcst" [ab021c06-ef2e-4e4c-9dfe-f9b93f97cf31] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.007006231s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.42s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-056687 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.24s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-056687 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-056687 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.17s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (53.8s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-056687 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
E0819 21:25:36.130921 1145018 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/old-k8s-version-127648/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-056687 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (53.800681231s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (53.80s)
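In the custom-flannel profile --cni points at a CNI manifest file (testdata/kube-flannel.yaml) rather than a built-in plugin name; a minimal sketch of that invocation with the flags from this run (the --alsologtostderr and --wait flags are omitted for brevity):

  out/minikube-linux-arm64 start -p custom-flannel-056687 --memory=3072 \
    --cni=testdata/kube-flannel.yaml --driver=docker --container-runtime=containerd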

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-nqxwp" [30341f46-5b7b-4472-b3e2-c9f9727cd43c] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005648315s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-056687 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.33s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (9.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-056687 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-9gqf6" [9f68b6df-6181-40ee-9e59-1656a291f8f2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0819 21:26:16.887782 1145018 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/no-preload-785099/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-9gqf6" [9f68b6df-6181-40ee-9e59-1656a291f8f2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.003804585s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.26s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-056687 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.21s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-056687 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-056687 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.19s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-056687 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.34s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (8.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-056687 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-tqbdn" [4d737a88-3b6e-4692-a004-d185f0b497c5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-tqbdn" [4d737a88-3b6e-4692-a004-d185f0b497c5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 8.004444694s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (8.30s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-056687 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.31s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-056687 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.20s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-056687 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.24s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (51.46s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-056687 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-056687 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (51.458462362s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (51.46s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (57.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-056687 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-056687 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (57.149771389s)
--- PASS: TestNetworkPlugins/group/flannel/Start (57.15s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-056687 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.39s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-056687 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-5wfcg" [22399f4b-d92b-4e49-ac19-4f1b703938ea] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-5wfcg" [22399f4b-d92b-4e49-ac19-4f1b703938ea] Running
E0819 21:27:49.999138 1145018 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/functional-219483/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.004038s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.40s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-056687 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-056687 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-056687 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-98cf6" [e6e0d10e-b3a0-452c-8329-79dd29e5a229] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004562207s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-056687 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.36s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (9.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-056687 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-8c4ws" [9fa95d85-b2a8-42fa-b407-4bb8bf92c704] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0819 21:28:06.930265 1145018 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/functional-219483/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-8c4ws" [9fa95d85-b2a8-42fa-b407-4bb8bf92c704] Running
E0819 21:28:10.060818 1145018 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/default-k8s-diff-port-645275/client.crt: no such file or directory" logger="UnhandledError"
E0819 21:28:10.067610 1145018 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/default-k8s-diff-port-645275/client.crt: no such file or directory" logger="UnhandledError"
E0819 21:28:10.078971 1145018 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/default-k8s-diff-port-645275/client.crt: no such file or directory" logger="UnhandledError"
E0819 21:28:10.100357 1145018 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/default-k8s-diff-port-645275/client.crt: no such file or directory" logger="UnhandledError"
E0819 21:28:10.141746 1145018 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/default-k8s-diff-port-645275/client.crt: no such file or directory" logger="UnhandledError"
E0819 21:28:10.223085 1145018 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/default-k8s-diff-port-645275/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.005829317s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.35s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (81.58s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-056687 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-056687 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (1m21.576937658s)
--- PASS: TestNetworkPlugins/group/bridge/Start (81.58s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-056687 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.28s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-056687 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.20s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-056687 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.20s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-056687 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.28s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (10.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-056687 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-mjq7d" [912dc089-6839-4066-85b8-047a9929e910] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-mjq7d" [912dc089-6839-4066-85b8-047a9929e910] Running
E0819 21:29:44.069482 1145018 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1139612/.minikube/profiles/auto-056687/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.004475301s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.27s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-056687 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-056687 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-056687 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.17s)

                                                
                                    

Test skip (28/328)

TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0.56s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-237579 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-237579" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-237579
--- SKIP: TestDownloadOnlyKic (0.56s)

                                                
                                    
TestOffline (0s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
TestAddons/parallel/HelmTiller (0s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:446: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0.01s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.01s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.17s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-306211" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-306211
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)

TestNetworkPlugins/group/kubenet (5.66s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-056687 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-056687

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-056687

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-056687

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-056687

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-056687

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-056687

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-056687

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-056687

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-056687

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-056687

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-056687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-056687"

>>> host: /etc/hosts:
* Profile "kubenet-056687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-056687"

>>> host: /etc/resolv.conf:
* Profile "kubenet-056687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-056687"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-056687

>>> host: crictl pods:
* Profile "kubenet-056687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-056687"

>>> host: crictl containers:
* Profile "kubenet-056687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-056687"

>>> k8s: describe netcat deployment:
error: context "kubenet-056687" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-056687" does not exist

>>> k8s: netcat logs:
error: context "kubenet-056687" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-056687" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-056687" does not exist

>>> k8s: coredns logs:
error: context "kubenet-056687" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-056687" does not exist

>>> k8s: api server logs:
error: context "kubenet-056687" does not exist

>>> host: /etc/cni:
* Profile "kubenet-056687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-056687"

>>> host: ip a s:
* Profile "kubenet-056687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-056687"

>>> host: ip r s:
* Profile "kubenet-056687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-056687"

>>> host: iptables-save:
* Profile "kubenet-056687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-056687"

>>> host: iptables table nat:
* Profile "kubenet-056687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-056687"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-056687" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-056687" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-056687" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-056687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-056687"

>>> host: kubelet daemon config:
* Profile "kubenet-056687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-056687"

>>> k8s: kubelet logs:
* Profile "kubenet-056687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-056687"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-056687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-056687"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-056687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-056687"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-056687

>>> host: docker daemon status:
* Profile "kubenet-056687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-056687"

>>> host: docker daemon config:
* Profile "kubenet-056687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-056687"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-056687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-056687"

>>> host: docker system info:
* Profile "kubenet-056687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-056687"

>>> host: cri-docker daemon status:
* Profile "kubenet-056687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-056687"

>>> host: cri-docker daemon config:
* Profile "kubenet-056687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-056687"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-056687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-056687"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-056687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-056687"

>>> host: cri-dockerd version:
* Profile "kubenet-056687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-056687"

>>> host: containerd daemon status:
* Profile "kubenet-056687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-056687"

>>> host: containerd daemon config:
* Profile "kubenet-056687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-056687"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-056687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-056687"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-056687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-056687"

>>> host: containerd config dump:
* Profile "kubenet-056687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-056687"

>>> host: crio daemon status:
* Profile "kubenet-056687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-056687"

>>> host: crio daemon config:
* Profile "kubenet-056687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-056687"

>>> host: /etc/crio:
* Profile "kubenet-056687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-056687"

>>> host: crio config:
* Profile "kubenet-056687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-056687"

----------------------- debugLogs end: kubenet-056687 [took: 5.426966637s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-056687" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-056687
--- SKIP: TestNetworkPlugins/group/kubenet (5.66s)
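Note on the debugLogs above: every probe reported a "context"/"profile not found" error simply because this skipped test never created the kubenet-056687 cluster. A minimal sketch of reproducing those probes by hand, assuming the docker driver and containerd runtime used throughout this run (the flag values are assumptions, not taken from the log):
  out/minikube-linux-arm64 start -p kubenet-056687 --driver=docker --container-runtime=containerd
  kubectl --context kubenet-056687 get nodes,services,endpoints,daemonsets,deployments,pods -A
  out/minikube-linux-arm64 delete -p kubenet-056687
The same applies to the cilium-056687 debugLogs further below.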

TestNetworkPlugins/group/cilium (4.95s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-056687 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-056687

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-056687

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-056687

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-056687

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-056687

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-056687

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-056687

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-056687

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-056687

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-056687

>>> host: /etc/nsswitch.conf:
* Profile "cilium-056687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-056687"

>>> host: /etc/hosts:
* Profile "cilium-056687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-056687"

>>> host: /etc/resolv.conf:
* Profile "cilium-056687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-056687"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-056687

>>> host: crictl pods:
* Profile "cilium-056687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-056687"

>>> host: crictl containers:
* Profile "cilium-056687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-056687"

>>> k8s: describe netcat deployment:
error: context "cilium-056687" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-056687" does not exist

>>> k8s: netcat logs:
error: context "cilium-056687" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-056687" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-056687" does not exist

>>> k8s: coredns logs:
error: context "cilium-056687" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-056687" does not exist

>>> k8s: api server logs:
error: context "cilium-056687" does not exist

>>> host: /etc/cni:
* Profile "cilium-056687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-056687"

>>> host: ip a s:
* Profile "cilium-056687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-056687"

>>> host: ip r s:
* Profile "cilium-056687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-056687"

>>> host: iptables-save:
* Profile "cilium-056687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-056687"

>>> host: iptables table nat:
* Profile "cilium-056687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-056687"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-056687

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-056687

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-056687" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-056687" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-056687

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-056687

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-056687" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-056687" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-056687" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-056687" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-056687" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-056687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-056687"

>>> host: kubelet daemon config:
* Profile "cilium-056687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-056687"

>>> k8s: kubelet logs:
* Profile "cilium-056687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-056687"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-056687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-056687"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-056687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-056687"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-056687

>>> host: docker daemon status:
* Profile "cilium-056687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-056687"

>>> host: docker daemon config:
* Profile "cilium-056687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-056687"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-056687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-056687"

>>> host: docker system info:
* Profile "cilium-056687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-056687"

>>> host: cri-docker daemon status:
* Profile "cilium-056687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-056687"

>>> host: cri-docker daemon config:
* Profile "cilium-056687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-056687"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-056687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-056687"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-056687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-056687"

>>> host: cri-dockerd version:
* Profile "cilium-056687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-056687"

>>> host: containerd daemon status:
* Profile "cilium-056687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-056687"

>>> host: containerd daemon config:
* Profile "cilium-056687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-056687"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-056687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-056687"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-056687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-056687"

>>> host: containerd config dump:
* Profile "cilium-056687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-056687"

>>> host: crio daemon status:
* Profile "cilium-056687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-056687"

>>> host: crio daemon config:
* Profile "cilium-056687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-056687"

>>> host: /etc/crio:
* Profile "cilium-056687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-056687"

>>> host: crio config:
* Profile "cilium-056687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-056687"

----------------------- debugLogs end: cilium-056687 [took: 4.773090523s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-056687" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-056687
--- SKIP: TestNetworkPlugins/group/cilium (4.95s)
