Test Report: Docker_Linux_containerd_arm64 18943

a95fbdf9550db8c431fa5a4c330192118acd2cbf:2024-08-31:36027

Failed tests (2/338)

| Order | Failed test                                            | Duration (s) |
|-------|--------------------------------------------------------|--------------|
| 29    | TestAddons/serial/Volcano                              | 199.82       |
| 314   | TestStartStop/group/old-k8s-version/serial/SecondStart | 373.41       |
TestAddons/serial/Volcano (199.82s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:913: volcano-controller stabilized in 42.338511ms
addons_test.go:897: volcano-scheduler stabilized in 43.118446ms
addons_test.go:905: volcano-admission stabilized in 43.162516ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:345: "volcano-scheduler-576bc46687-g444f" [20fec7ba-f35f-471f-86fa-6b709e57d51e] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.004191128s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:345: "volcano-admission-77d7d48b68-pg6tc" [0f8282d1-81cd-4a69-a70e-a638d849b6f1] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003367714s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:345: "volcano-controllers-56675bb4d5-qbpd8" [91571fa0-a087-4adf-ad0c-c2a91343f2a4] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003770477s
addons_test.go:932: (dbg) Run:  kubectl --context addons-516593 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-516593 create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-516593 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:345: "test-job-nginx-0" [62a307ec-3e3b-4544-9b66-1c10e91292aa] Pending
helpers_test.go:345: "test-job-nginx-0" [62a307ec-3e3b-4544-9b66-1c10e91292aa] Pending: PodScheduled:Unschedulable (0/1 nodes are unavailable: 1 Insufficient cpu.)
helpers_test.go:330: TestAddons/serial/Volcano: WARNING: pod list for "my-volcano" "volcano.sh/job-name=test-job" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:964: ***** TestAddons/serial/Volcano: pod "volcano.sh/job-name=test-job" failed to start within 3m0s: context deadline exceeded ****
addons_test.go:964: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-516593 -n addons-516593
addons_test.go:964: TestAddons/serial/Volcano: showing logs for failed pods as of 2024-08-31 22:27:40.725716747 +0000 UTC m=+432.219549250
addons_test.go:964: (dbg) Run:  kubectl --context addons-516593 describe po test-job-nginx-0 -n my-volcano
addons_test.go:964: (dbg) kubectl --context addons-516593 describe po test-job-nginx-0 -n my-volcano:
Name:             test-job-nginx-0
Namespace:        my-volcano
Priority:         0
Service Account:  default
Node:             <none>
Labels:           volcano.sh/job-name=test-job
volcano.sh/job-namespace=my-volcano
volcano.sh/queue-name=test
volcano.sh/task-index=0
volcano.sh/task-spec=nginx
Annotations:      scheduling.k8s.io/group-name: test-job-090c7072-559e-448a-8572-7ddb11e1742d
volcano.sh/job-name: test-job
volcano.sh/job-version: 0
volcano.sh/queue-name: test
volcano.sh/task-index: 0
volcano.sh/task-spec: nginx
volcano.sh/template-uid: test-job-nginx
Status:           Pending
IP:               
IPs:              <none>
Controlled By:    Job/test-job
Containers:
nginx:
Image:      nginx:latest
Port:       <none>
Host Port:  <none>
Command:
sleep
10m
Limits:
cpu:  1
Requests:
cpu:  1
Environment:
GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
PROJECT_ID:                      this_is_fake
GCP_PROJECT:                     this_is_fake
GCLOUD_PROJECT:                  this_is_fake
GOOGLE_CLOUD_PROJECT:            this_is_fake
CLOUDSDK_CORE_PROJECT:           this_is_fake
Mounts:
/google-app-creds.json from gcp-creds (ro)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tvlmw (ro)
Conditions:
Type           Status
PodScheduled   False 
Volumes:
kube-api-access-tvlmw:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
gcp-creds:
Type:          HostPath (bare host directory volume)
Path:          /var/lib/minikube/google_application_credentials.json
HostPathType:  File
QoS Class:         Burstable
Node-Selectors:    <none>
Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason            Age    From     Message
----     ------            ----   ----     -------
Warning  FailedScheduling  2m59s  volcano  0/1 nodes are unavailable: 1 Insufficient cpu.
addons_test.go:964: (dbg) Run:  kubectl --context addons-516593 logs test-job-nginx-0 -n my-volcano
addons_test.go:964: (dbg) kubectl --context addons-516593 logs test-job-nginx-0 -n my-volcano:
addons_test.go:965: failed waiting for test-local-path pod: volcano.sh/job-name=test-job within 3m0s: context deadline exceeded
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestAddons/serial/Volcano]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect addons-516593
helpers_test.go:236: (dbg) docker inspect addons-516593:

-- stdout --
	[
	    {
	        "Id": "55442c27012e1bce923b01f92cec52fd9596afbb7a0bba17a3807b6cadb2359e",
	        "Created": "2024-08-31T22:21:11.101132863Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1168043,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-08-31T22:21:11.229272981Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:eb620c1d7126103417d4dc31eb6aaaf95b0878713d0303a36cb77002c31b0deb",
	        "ResolvConfPath": "/var/lib/docker/containers/55442c27012e1bce923b01f92cec52fd9596afbb7a0bba17a3807b6cadb2359e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/55442c27012e1bce923b01f92cec52fd9596afbb7a0bba17a3807b6cadb2359e/hostname",
	        "HostsPath": "/var/lib/docker/containers/55442c27012e1bce923b01f92cec52fd9596afbb7a0bba17a3807b6cadb2359e/hosts",
	        "LogPath": "/var/lib/docker/containers/55442c27012e1bce923b01f92cec52fd9596afbb7a0bba17a3807b6cadb2359e/55442c27012e1bce923b01f92cec52fd9596afbb7a0bba17a3807b6cadb2359e-json.log",
	        "Name": "/addons-516593",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-516593:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-516593",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/d1ebfd34eb93f3b2f8dac74c7aeb47ba45083ef98a2b2e194d24cad01909190d-init/diff:/var/lib/docker/overlay2/e3c84f94aefed91511672b053b6e522f115b49b6c1ddbd2cec747cd29cd10f0e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d1ebfd34eb93f3b2f8dac74c7aeb47ba45083ef98a2b2e194d24cad01909190d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d1ebfd34eb93f3b2f8dac74c7aeb47ba45083ef98a2b2e194d24cad01909190d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d1ebfd34eb93f3b2f8dac74c7aeb47ba45083ef98a2b2e194d24cad01909190d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-516593",
	                "Source": "/var/lib/docker/volumes/addons-516593/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-516593",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-516593",
	                "name.minikube.sigs.k8s.io": "addons-516593",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7a0141f187eeac7cd7aaa8f0dfbcd875c40516bd95922af23124ea3e527fe46f",
	            "SandboxKey": "/var/run/docker/netns/7a0141f187ee",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34249"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34250"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34253"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34251"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34252"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-516593": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "d3603625f6e8ab549785d0a85e03af9da7c8c5abc287439dc3469680764b1fcb",
	                    "EndpointID": "4822c4f3cfd34a9f446ad146a0c35438165d198d0ad013e3ade6b25919411777",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-516593",
	                        "55442c27012e"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:240: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-516593 -n addons-516593
helpers_test.go:245: <<< TestAddons/serial/Volcano FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestAddons/serial/Volcano]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p addons-516593 logs -n 25
helpers_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p addons-516593 logs -n 25: (1.607539289s)
helpers_test.go:253: TestAddons/serial/Volcano logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-628848   | jenkins | v1.33.1 | 31 Aug 24 22:20 UTC |                     |
	|         | -p download-only-628848              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.33.1 | 31 Aug 24 22:20 UTC | 31 Aug 24 22:20 UTC |
	| delete  | -p download-only-628848              | download-only-628848   | jenkins | v1.33.1 | 31 Aug 24 22:20 UTC | 31 Aug 24 22:20 UTC |
	| start   | -o=json --download-only              | download-only-610624   | jenkins | v1.33.1 | 31 Aug 24 22:20 UTC |                     |
	|         | -p download-only-610624              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0         |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.33.1 | 31 Aug 24 22:20 UTC | 31 Aug 24 22:20 UTC |
	| delete  | -p download-only-610624              | download-only-610624   | jenkins | v1.33.1 | 31 Aug 24 22:20 UTC | 31 Aug 24 22:20 UTC |
	| delete  | -p download-only-628848              | download-only-628848   | jenkins | v1.33.1 | 31 Aug 24 22:20 UTC | 31 Aug 24 22:20 UTC |
	| delete  | -p download-only-610624              | download-only-610624   | jenkins | v1.33.1 | 31 Aug 24 22:20 UTC | 31 Aug 24 22:20 UTC |
	| start   | --download-only -p                   | download-docker-630217 | jenkins | v1.33.1 | 31 Aug 24 22:20 UTC |                     |
	|         | download-docker-630217               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p download-docker-630217            | download-docker-630217 | jenkins | v1.33.1 | 31 Aug 24 22:20 UTC | 31 Aug 24 22:20 UTC |
	| start   | --download-only -p                   | binary-mirror-741122   | jenkins | v1.33.1 | 31 Aug 24 22:20 UTC |                     |
	|         | binary-mirror-741122                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:34335               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-741122              | binary-mirror-741122   | jenkins | v1.33.1 | 31 Aug 24 22:20 UTC | 31 Aug 24 22:20 UTC |
	| addons  | enable dashboard -p                  | addons-516593          | jenkins | v1.33.1 | 31 Aug 24 22:20 UTC |                     |
	|         | addons-516593                        |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                 | addons-516593          | jenkins | v1.33.1 | 31 Aug 24 22:20 UTC |                     |
	|         | addons-516593                        |                        |         |         |                     |                     |
	| start   | -p addons-516593 --wait=true         | addons-516593          | jenkins | v1.33.1 | 31 Aug 24 22:20 UTC | 31 Aug 24 22:24 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/31 22:20:45
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0831 22:20:45.862534 1167551 out.go:345] Setting OutFile to fd 1 ...
	I0831 22:20:45.862713 1167551 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:20:45.862734 1167551 out.go:358] Setting ErrFile to fd 2...
	I0831 22:20:45.862763 1167551 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:20:45.863019 1167551 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-1161402/.minikube/bin
	I0831 22:20:45.863562 1167551 out.go:352] Setting JSON to false
	I0831 22:20:45.864444 1167551 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":21795,"bootTime":1725121051,"procs":160,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0831 22:20:45.864547 1167551 start.go:139] virtualization:  
	I0831 22:20:45.867374 1167551 out.go:177] * [addons-516593] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0831 22:20:45.869885 1167551 out.go:177]   - MINIKUBE_LOCATION=18943
	I0831 22:20:45.869958 1167551 notify.go:220] Checking for updates...
	I0831 22:20:45.873702 1167551 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0831 22:20:45.875735 1167551 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18943-1161402/kubeconfig
	I0831 22:20:45.877437 1167551 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-1161402/.minikube
	I0831 22:20:45.879218 1167551 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0831 22:20:45.880879 1167551 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0831 22:20:45.882663 1167551 driver.go:392] Setting default libvirt URI to qemu:///system
	I0831 22:20:45.913802 1167551 docker.go:123] docker version: linux-27.2.0:Docker Engine - Community
	I0831 22:20:45.913923 1167551 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0831 22:20:45.967552 1167551 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-08-31 22:20:45.958230555 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0831 22:20:45.967661 1167551 docker.go:307] overlay module found
	I0831 22:20:45.969576 1167551 out.go:177] * Using the docker driver based on user configuration
	I0831 22:20:45.971589 1167551 start.go:297] selected driver: docker
	I0831 22:20:45.971613 1167551 start.go:901] validating driver "docker" against <nil>
	I0831 22:20:45.971627 1167551 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0831 22:20:45.972269 1167551 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0831 22:20:46.032587 1167551 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-08-31 22:20:46.02238216 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0831 22:20:46.032807 1167551 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0831 22:20:46.033064 1167551 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0831 22:20:46.034979 1167551 out.go:177] * Using Docker driver with root privileges
	I0831 22:20:46.036604 1167551 cni.go:84] Creating CNI manager for ""
	I0831 22:20:46.036656 1167551 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0831 22:20:46.036669 1167551 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0831 22:20:46.036751 1167551 start.go:340] cluster config:
	{Name:addons-516593 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-516593 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 22:20:46.039023 1167551 out.go:177] * Starting "addons-516593" primary control-plane node in "addons-516593" cluster
	I0831 22:20:46.040903 1167551 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0831 22:20:46.042589 1167551 out.go:177] * Pulling base image v0.0.44-1724862063-19530 ...
	I0831 22:20:46.044250 1167551 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime containerd
	I0831 22:20:46.044255 1167551 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 in local docker daemon
	I0831 22:20:46.044320 1167551 preload.go:146] Found local preload: /home/jenkins/minikube-integration/18943-1161402/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4
	I0831 22:20:46.044331 1167551 cache.go:56] Caching tarball of preloaded images
	I0831 22:20:46.044412 1167551 preload.go:172] Found /home/jenkins/minikube-integration/18943-1161402/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0831 22:20:46.044423 1167551 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on containerd
	I0831 22:20:46.044994 1167551 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/addons-516593/config.json ...
	I0831 22:20:46.045025 1167551 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/addons-516593/config.json: {Name:mk0469f48d191e59814de84e2f628e2f6014f46e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:20:46.058945 1167551 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 to local cache
	I0831 22:20:46.059074 1167551 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 in local cache directory
	I0831 22:20:46.059099 1167551 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 in local cache directory, skipping pull
	I0831 22:20:46.059104 1167551 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 exists in cache, skipping pull
	I0831 22:20:46.059116 1167551 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 as a tarball
	I0831 22:20:46.059122 1167551 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 from local cache
	I0831 22:21:03.282767 1167551 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 from cached tarball
	I0831 22:21:03.282810 1167551 cache.go:194] Successfully downloaded all kic artifacts
	I0831 22:21:03.282855 1167551 start.go:360] acquireMachinesLock for addons-516593: {Name:mkf0fb3803e77a7b609c2ba63b32bdeec8bff2de Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0831 22:21:03.283665 1167551 start.go:364] duration metric: took 782.471µs to acquireMachinesLock for "addons-516593"
	I0831 22:21:03.283702 1167551 start.go:93] Provisioning new machine with config: &{Name:addons-516593 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-516593 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0831 22:21:03.283785 1167551 start.go:125] createHost starting for "" (driver="docker")
	I0831 22:21:03.286015 1167551 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0831 22:21:03.286270 1167551 start.go:159] libmachine.API.Create for "addons-516593" (driver="docker")
	I0831 22:21:03.286303 1167551 client.go:168] LocalClient.Create starting
	I0831 22:21:03.286413 1167551 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/18943-1161402/.minikube/certs/ca.pem
	I0831 22:21:04.450815 1167551 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/18943-1161402/.minikube/certs/cert.pem
	I0831 22:21:04.798097 1167551 cli_runner.go:164] Run: docker network inspect addons-516593 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0831 22:21:04.812503 1167551 cli_runner.go:211] docker network inspect addons-516593 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0831 22:21:04.812597 1167551 network_create.go:284] running [docker network inspect addons-516593] to gather additional debugging logs...
	I0831 22:21:04.812638 1167551 cli_runner.go:164] Run: docker network inspect addons-516593
	W0831 22:21:04.827659 1167551 cli_runner.go:211] docker network inspect addons-516593 returned with exit code 1
	I0831 22:21:04.827694 1167551 network_create.go:287] error running [docker network inspect addons-516593]: docker network inspect addons-516593: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-516593 not found
	I0831 22:21:04.827726 1167551 network_create.go:289] output of [docker network inspect addons-516593]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-516593 not found
	
	** /stderr **
	I0831 22:21:04.827843 1167551 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0831 22:21:04.843912 1167551 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40004f7b90}
	I0831 22:21:04.843958 1167551 network_create.go:124] attempt to create docker network addons-516593 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0831 22:21:04.844018 1167551 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-516593 addons-516593
	I0831 22:21:04.911372 1167551 network_create.go:108] docker network addons-516593 192.168.49.0/24 created
	I0831 22:21:04.911405 1167551 kic.go:121] calculated static IP "192.168.49.2" for the "addons-516593" container
	I0831 22:21:04.911476 1167551 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0831 22:21:04.926782 1167551 cli_runner.go:164] Run: docker volume create addons-516593 --label name.minikube.sigs.k8s.io=addons-516593 --label created_by.minikube.sigs.k8s.io=true
	I0831 22:21:04.943201 1167551 oci.go:103] Successfully created a docker volume addons-516593
	I0831 22:21:04.943296 1167551 cli_runner.go:164] Run: docker run --rm --name addons-516593-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-516593 --entrypoint /usr/bin/test -v addons-516593:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 -d /var/lib
	I0831 22:21:06.930175 1167551 cli_runner.go:217] Completed: docker run --rm --name addons-516593-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-516593 --entrypoint /usr/bin/test -v addons-516593:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 -d /var/lib: (1.986836611s)
	I0831 22:21:06.930211 1167551 oci.go:107] Successfully prepared a docker volume addons-516593
	I0831 22:21:06.930230 1167551 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime containerd
	I0831 22:21:06.930249 1167551 kic.go:194] Starting extracting preloaded images to volume ...
	I0831 22:21:06.930331 1167551 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-1161402/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-516593:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0831 22:21:11.038592 1167551 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-1161402/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-516593:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 -I lz4 -xf /preloaded.tar -C /extractDir: (4.108216524s)
	I0831 22:21:11.038625 1167551 kic.go:203] duration metric: took 4.108372175s to extract preloaded images to volume ...
	W0831 22:21:11.038765 1167551 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0831 22:21:11.038881 1167551 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0831 22:21:11.087145 1167551 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-516593 --name addons-516593 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-516593 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-516593 --network addons-516593 --ip 192.168.49.2 --volume addons-516593:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0
	I0831 22:21:11.389290 1167551 cli_runner.go:164] Run: docker container inspect addons-516593 --format={{.State.Running}}
	I0831 22:21:11.410854 1167551 cli_runner.go:164] Run: docker container inspect addons-516593 --format={{.State.Status}}
	I0831 22:21:11.434401 1167551 cli_runner.go:164] Run: docker exec addons-516593 stat /var/lib/dpkg/alternatives/iptables
	I0831 22:21:11.496743 1167551 oci.go:144] the created container "addons-516593" has a running status.
	I0831 22:21:11.496771 1167551 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18943-1161402/.minikube/machines/addons-516593/id_rsa...
	I0831 22:21:11.715372 1167551 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18943-1161402/.minikube/machines/addons-516593/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0831 22:21:11.741834 1167551 cli_runner.go:164] Run: docker container inspect addons-516593 --format={{.State.Status}}
	I0831 22:21:11.784890 1167551 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0831 22:21:11.784913 1167551 kic_runner.go:114] Args: [docker exec --privileged addons-516593 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0831 22:21:11.866222 1167551 cli_runner.go:164] Run: docker container inspect addons-516593 --format={{.State.Status}}
	I0831 22:21:11.893026 1167551 machine.go:93] provisionDockerMachine start ...
	I0831 22:21:11.893119 1167551 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-516593
	I0831 22:21:11.915465 1167551 main.go:141] libmachine: Using SSH client type: native
	I0831 22:21:11.915739 1167551 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 34249 <nil> <nil>}
	I0831 22:21:11.915755 1167551 main.go:141] libmachine: About to run SSH command:
	hostname
	I0831 22:21:11.919303 1167551 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0831 22:21:15.075502 1167551 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-516593
	
	I0831 22:21:15.075579 1167551 ubuntu.go:169] provisioning hostname "addons-516593"
	I0831 22:21:15.075718 1167551 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-516593
	I0831 22:21:15.100203 1167551 main.go:141] libmachine: Using SSH client type: native
	I0831 22:21:15.100470 1167551 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 34249 <nil> <nil>}
	I0831 22:21:15.100483 1167551 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-516593 && echo "addons-516593" | sudo tee /etc/hostname
	I0831 22:21:15.253142 1167551 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-516593
	
	I0831 22:21:15.253269 1167551 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-516593
	I0831 22:21:15.270132 1167551 main.go:141] libmachine: Using SSH client type: native
	I0831 22:21:15.270376 1167551 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 34249 <nil> <nil>}
	I0831 22:21:15.270397 1167551 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-516593' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-516593/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-516593' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0831 22:21:15.400449 1167551 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0831 22:21:15.400477 1167551 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18943-1161402/.minikube CaCertPath:/home/jenkins/minikube-integration/18943-1161402/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18943-1161402/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18943-1161402/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18943-1161402/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18943-1161402/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18943-1161402/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18943-1161402/.minikube}
	I0831 22:21:15.400505 1167551 ubuntu.go:177] setting up certificates
	I0831 22:21:15.400515 1167551 provision.go:84] configureAuth start
	I0831 22:21:15.400575 1167551 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "addons-516593")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-516593
	I0831 22:21:15.423478 1167551 provision.go:143] copyHostCerts
	I0831 22:21:15.423568 1167551 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-1161402/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18943-1161402/.minikube/ca.pem (1078 bytes)
	I0831 22:21:15.423692 1167551 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-1161402/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18943-1161402/.minikube/cert.pem (1123 bytes)
	I0831 22:21:15.423754 1167551 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-1161402/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18943-1161402/.minikube/key.pem (1679 bytes)
	I0831 22:21:15.423805 1167551 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18943-1161402/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18943-1161402/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18943-1161402/.minikube/certs/ca-key.pem org=jenkins.addons-516593 san=[127.0.0.1 192.168.49.2 addons-516593 localhost minikube]
	I0831 22:21:15.638970 1167551 provision.go:177] copyRemoteCerts
	I0831 22:21:15.639053 1167551 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0831 22:21:15.639100 1167551 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-516593
	I0831 22:21:15.654486 1167551 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34249 SSHKeyPath:/home/jenkins/minikube-integration/18943-1161402/.minikube/machines/addons-516593/id_rsa Username:docker}
	I0831 22:21:15.749350 1167551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-1161402/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0831 22:21:15.772301 1167551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-1161402/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0831 22:21:15.795519 1167551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-1161402/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0831 22:21:15.818737 1167551 provision.go:87] duration metric: took 418.207746ms to configureAuth
	I0831 22:21:15.818764 1167551 ubuntu.go:193] setting minikube options for container-runtime
	I0831 22:21:15.818957 1167551 config.go:182] Loaded profile config "addons-516593": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0831 22:21:15.818971 1167551 machine.go:96] duration metric: took 3.925927423s to provisionDockerMachine
	I0831 22:21:15.818978 1167551 client.go:171] duration metric: took 12.532665686s to LocalClient.Create
	I0831 22:21:15.818996 1167551 start.go:167] duration metric: took 12.532727388s to libmachine.API.Create "addons-516593"
	I0831 22:21:15.819006 1167551 start.go:293] postStartSetup for "addons-516593" (driver="docker")
	I0831 22:21:15.819016 1167551 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0831 22:21:15.819080 1167551 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0831 22:21:15.819127 1167551 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-516593
	I0831 22:21:15.834606 1167551 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34249 SSHKeyPath:/home/jenkins/minikube-integration/18943-1161402/.minikube/machines/addons-516593/id_rsa Username:docker}
	I0831 22:21:15.929639 1167551 ssh_runner.go:195] Run: cat /etc/os-release
	I0831 22:21:15.932760 1167551 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0831 22:21:15.932799 1167551 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0831 22:21:15.932811 1167551 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0831 22:21:15.932819 1167551 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0831 22:21:15.932834 1167551 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-1161402/.minikube/addons for local assets ...
	I0831 22:21:15.932906 1167551 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-1161402/.minikube/files for local assets ...
	I0831 22:21:15.932931 1167551 start.go:296] duration metric: took 113.918825ms for postStartSetup
	I0831 22:21:15.933251 1167551 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "addons-516593")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-516593
	I0831 22:21:15.948531 1167551 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/addons-516593/config.json ...
	I0831 22:21:15.948958 1167551 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0831 22:21:15.949019 1167551 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-516593
	I0831 22:21:15.964321 1167551 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34249 SSHKeyPath:/home/jenkins/minikube-integration/18943-1161402/.minikube/machines/addons-516593/id_rsa Username:docker}
	I0831 22:21:16.073486 1167551 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0831 22:21:16.077717 1167551 start.go:128] duration metric: took 12.793917235s to createHost
	I0831 22:21:16.077743 1167551 start.go:83] releasing machines lock for "addons-516593", held for 12.794061382s
	I0831 22:21:16.077826 1167551 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "addons-516593")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-516593
	I0831 22:21:16.093618 1167551 ssh_runner.go:195] Run: cat /version.json
	I0831 22:21:16.093634 1167551 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0831 22:21:16.093674 1167551 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-516593
	I0831 22:21:16.093703 1167551 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-516593
	I0831 22:21:16.113210 1167551 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34249 SSHKeyPath:/home/jenkins/minikube-integration/18943-1161402/.minikube/machines/addons-516593/id_rsa Username:docker}
	I0831 22:21:16.116372 1167551 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34249 SSHKeyPath:/home/jenkins/minikube-integration/18943-1161402/.minikube/machines/addons-516593/id_rsa Username:docker}
	I0831 22:21:16.204192 1167551 ssh_runner.go:195] Run: systemctl --version
	I0831 22:21:16.332190 1167551 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0831 22:21:16.336689 1167551 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0831 22:21:16.364787 1167551 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0831 22:21:16.364868 1167551 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0831 22:21:16.392974 1167551 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
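
Note how the two CNI cleanup passes differ: the first patches any loopback config in place (adding a name field and pinning cniVersion to 1.0.0), while the second does not delete the conflicting bridge/podman configs but parks them under a .mk_disabled suffix. That keeps the change reversible; a sketch:

    # Configs minikube parked rather than deleted (suffix from the log above)
    ls /etc/cni/net.d/*.mk_disabled
    # To restore them later, strip the suffix again
    for f in /etc/cni/net.d/*.mk_disabled; do sudo mv "$f" "${f%.mk_disabled}"; done
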
	I0831 22:21:16.393041 1167551 start.go:495] detecting cgroup driver to use...
	I0831 22:21:16.393090 1167551 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0831 22:21:16.393188 1167551 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0831 22:21:16.406098 1167551 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0831 22:21:16.417573 1167551 docker.go:217] disabling cri-docker service (if available) ...
	I0831 22:21:16.417647 1167551 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0831 22:21:16.430849 1167551 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0831 22:21:16.444363 1167551 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0831 22:21:16.534198 1167551 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0831 22:21:16.628887 1167551 docker.go:233] disabling docker service ...
	I0831 22:21:16.628989 1167551 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0831 22:21:16.648480 1167551 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0831 22:21:16.660839 1167551 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0831 22:21:16.741326 1167551 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0831 22:21:16.837181 1167551 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0831 22:21:16.848795 1167551 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0831 22:21:16.865354 1167551 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0831 22:21:16.875277 1167551 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0831 22:21:16.885141 1167551 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0831 22:21:16.885225 1167551 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0831 22:21:16.894945 1167551 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0831 22:21:16.905105 1167551 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0831 22:21:16.914788 1167551 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0831 22:21:16.925730 1167551 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0831 22:21:16.935127 1167551 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0831 22:21:16.945281 1167551 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0831 22:21:16.955256 1167551 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
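
Taken together, the sed passes rewrite /etc/containerd/config.toml in place: the pause image is pinned to registry.k8s.io/pause:3.10, SystemdCgroup is forced to false to match the host's cgroupfs driver, the legacy runc v1/linux runtimes are mapped to io.containerd.runc.v2, the CNI conf_dir is set, and enable_unprivileged_ports is re-added under the CRI plugin. A grep mirroring those expressions confirms the result:

    grep -E 'sandbox_image|SystemdCgroup|enable_unprivileged_ports|conf_dir' \
        /etc/containerd/config.toml
    # expected on the default config layout (assumption):
    #   sandbox_image = "registry.k8s.io/pause:3.10"
    #   SystemdCgroup = false
    #   conf_dir = "/etc/cni/net.d"
    #   enable_unprivileged_ports = true
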
	I0831 22:21:16.965549 1167551 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0831 22:21:16.973967 1167551 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0831 22:21:16.982275 1167551 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0831 22:21:17.070761 1167551 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0831 22:21:17.201340 1167551 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0831 22:21:17.201479 1167551 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0831 22:21:17.205864 1167551 start.go:563] Will wait 60s for crictl version
	I0831 22:21:17.205969 1167551 ssh_runner.go:195] Run: which crictl
	I0831 22:21:17.209128 1167551 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0831 22:21:17.243845 1167551 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.21
	RuntimeApiVersion:  v1
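
The crictl calls here succeed without a --runtime-endpoint flag because of the one-line /etc/crictl.yaml written by the printf-and-tee a few steps earlier. Quick check on the node:

    cat /etc/crictl.yaml
    # runtime-endpoint: unix:///run/containerd/containerd.sock
    sudo crictl info    # should now talk to containerd without extra flags
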
	I0831 22:21:17.243965 1167551 ssh_runner.go:195] Run: containerd --version
	I0831 22:21:17.263975 1167551 ssh_runner.go:195] Run: containerd --version
	I0831 22:21:17.288178 1167551 out.go:177] * Preparing Kubernetes v1.31.0 on containerd 1.7.21 ...
	I0831 22:21:17.289816 1167551 cli_runner.go:164] Run: docker network inspect addons-516593 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0831 22:21:17.304366 1167551 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0831 22:21:17.307672 1167551 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
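
The bash one-liner above edits /etc/hosts atomically: grep -v filters any stale host.minikube.internal entry into a temp file, the fresh mapping is appended, and sudo cp moves the result back in one step. Verification leaves a single line mapping the docker network gateway:

    grep 'host.minikube.internal' /etc/hosts
    # 192.168.49.1	host.minikube.internal
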
	I0831 22:21:17.317883 1167551 kubeadm.go:883] updating cluster {Name:addons-516593 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-516593 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0831 22:21:17.318013 1167551 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime containerd
	I0831 22:21:17.318079 1167551 ssh_runner.go:195] Run: sudo crictl images --output json
	I0831 22:21:17.353319 1167551 containerd.go:627] all images are preloaded for containerd runtime.
	I0831 22:21:17.353343 1167551 containerd.go:534] Images already preloaded, skipping extraction
	I0831 22:21:17.353411 1167551 ssh_runner.go:195] Run: sudo crictl images --output json
	I0831 22:21:17.387456 1167551 containerd.go:627] all images are preloaded for containerd runtime.
	I0831 22:21:17.387480 1167551 cache_images.go:84] Images are preloaded, skipping loading
	I0831 22:21:17.387489 1167551 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.0 containerd true true} ...
	I0831 22:21:17.387913 1167551 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-516593 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:addons-516593 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
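
These kubelet flags end up in a systemd drop-in (the 10-kubeadm.conf scp'd a few lines below), which is why the unit's base ExecStart is cleared and replaced. The merged unit can be inspected on the node with standard systemd tooling:

    systemctl cat kubelet.service         # base unit plus minikube's drop-in
    systemctl show kubelet -p ExecStart   # the effective command line
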
	I0831 22:21:17.388003 1167551 ssh_runner.go:195] Run: sudo crictl info
	I0831 22:21:17.429238 1167551 cni.go:84] Creating CNI manager for ""
	I0831 22:21:17.429262 1167551 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0831 22:21:17.429271 1167551 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0831 22:21:17.429294 1167551 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-516593 NodeName:addons-516593 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0831 22:21:17.429426 1167551 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-516593"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
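
The file written to /var/tmp/minikube/kubeadm.yaml.new bundles four documents: InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration. If a hand-edited variant is ever fed back in, recent kubeadm releases can lint it offline before an init is attempted (assumption: the validate subcommand, added around v1.26, is present in the v1.31.0 binaries used here):

    sudo /var/lib/minikube/binaries/v1.31.0/kubeadm config validate \
        --config /var/tmp/minikube/kubeadm.yaml
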
	I0831 22:21:17.429497 1167551 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0831 22:21:17.437961 1167551 binaries.go:44] Found k8s binaries, skipping transfer
	I0831 22:21:17.438087 1167551 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0831 22:21:17.446713 1167551 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0831 22:21:17.463678 1167551 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0831 22:21:17.481384 1167551 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2167 bytes)
	I0831 22:21:17.499168 1167551 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0831 22:21:17.502889 1167551 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0831 22:21:17.513555 1167551 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0831 22:21:17.600764 1167551 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0831 22:21:17.616138 1167551 certs.go:68] Setting up /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/addons-516593 for IP: 192.168.49.2
	I0831 22:21:17.616162 1167551 certs.go:194] generating shared ca certs ...
	I0831 22:21:17.616179 1167551 certs.go:226] acquiring lock for ca certs: {Name:mk34cb0d7c9ce07dfc3fb4f77a59e5e1d853f8c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:21:17.616304 1167551 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/18943-1161402/.minikube/ca.key
	I0831 22:21:18.365071 1167551 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-1161402/.minikube/ca.crt ...
	I0831 22:21:18.365105 1167551 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-1161402/.minikube/ca.crt: {Name:mk14641453c4a7c8825f06d44ad5f5381f5cebea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:21:18.365894 1167551 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-1161402/.minikube/ca.key ...
	I0831 22:21:18.365912 1167551 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-1161402/.minikube/ca.key: {Name:mk40b52ebb9c7bc33c8bc57b798c6940b3fa2d12 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:21:18.366012 1167551 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18943-1161402/.minikube/proxy-client-ca.key
	I0831 22:21:19.169333 1167551 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-1161402/.minikube/proxy-client-ca.crt ...
	I0831 22:21:19.169364 1167551 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-1161402/.minikube/proxy-client-ca.crt: {Name:mk0eb61a64b4acdb03185c2f0df0cd1459fb3a63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:21:19.170121 1167551 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-1161402/.minikube/proxy-client-ca.key ...
	I0831 22:21:19.170141 1167551 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-1161402/.minikube/proxy-client-ca.key: {Name:mk84abc3a7a095cd375ee9b6789046e5440a7603 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:21:19.170275 1167551 certs.go:256] generating profile certs ...
	I0831 22:21:19.170353 1167551 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/addons-516593/client.key
	I0831 22:21:19.170373 1167551 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/addons-516593/client.crt with IP's: []
	I0831 22:21:19.416632 1167551 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/addons-516593/client.crt ...
	I0831 22:21:19.416669 1167551 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/addons-516593/client.crt: {Name:mk6fb2fe08c534e9b9ea7daa85e1e2ad12a0bc15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:21:19.417239 1167551 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/addons-516593/client.key ...
	I0831 22:21:19.417258 1167551 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/addons-516593/client.key: {Name:mk380c19ff52f0fb34f92ac5d1340f4db720b332 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:21:19.417698 1167551 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/addons-516593/apiserver.key.db6378eb
	I0831 22:21:19.417727 1167551 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/addons-516593/apiserver.crt.db6378eb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0831 22:21:19.744871 1167551 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/addons-516593/apiserver.crt.db6378eb ...
	I0831 22:21:19.744908 1167551 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/addons-516593/apiserver.crt.db6378eb: {Name:mk89a9862a8770b8eaa9b82b01f6a08d8efde960 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:21:19.745100 1167551 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/addons-516593/apiserver.key.db6378eb ...
	I0831 22:21:19.745116 1167551 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/addons-516593/apiserver.key.db6378eb: {Name:mk6a56abfc4fd6e9c544fed66851edb43cf51225 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:21:19.745582 1167551 certs.go:381] copying /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/addons-516593/apiserver.crt.db6378eb -> /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/addons-516593/apiserver.crt
	I0831 22:21:19.745680 1167551 certs.go:385] copying /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/addons-516593/apiserver.key.db6378eb -> /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/addons-516593/apiserver.key
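
The apiserver profile cert generated here is what kubeadm later reports as "Using existing apiserver certificate and key on disk"; per the log it is signed for the in-cluster service IP 10.96.0.1 and the node IP 192.168.49.2 alongside 127.0.0.1 and 10.0.0.1. The SAN list is easy to confirm:

    openssl x509 -noout -text \
        -in /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/addons-516593/apiserver.crt \
        | grep -A1 'Subject Alternative Name'
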
	I0831 22:21:19.745732 1167551 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/addons-516593/proxy-client.key
	I0831 22:21:19.745756 1167551 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/addons-516593/proxy-client.crt with IP's: []
	I0831 22:21:20.071825 1167551 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/addons-516593/proxy-client.crt ...
	I0831 22:21:20.071861 1167551 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/addons-516593/proxy-client.crt: {Name:mkddf7346615a4a3522b2d211cd5cef8e73e645c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:21:20.072059 1167551 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/addons-516593/proxy-client.key ...
	I0831 22:21:20.072076 1167551 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/addons-516593/proxy-client.key: {Name:mk32735c35282bf9ab8905d55761c24b9c2493c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:21:20.072757 1167551 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-1161402/.minikube/certs/ca-key.pem (1679 bytes)
	I0831 22:21:20.072806 1167551 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-1161402/.minikube/certs/ca.pem (1078 bytes)
	I0831 22:21:20.072836 1167551 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-1161402/.minikube/certs/cert.pem (1123 bytes)
	I0831 22:21:20.072870 1167551 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-1161402/.minikube/certs/key.pem (1679 bytes)
	I0831 22:21:20.073529 1167551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-1161402/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0831 22:21:20.100909 1167551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-1161402/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0831 22:21:20.126848 1167551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-1161402/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0831 22:21:20.152743 1167551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-1161402/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0831 22:21:20.177621 1167551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/addons-516593/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0831 22:21:20.203400 1167551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/addons-516593/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0831 22:21:20.230438 1167551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/addons-516593/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0831 22:21:20.256089 1167551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/addons-516593/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0831 22:21:20.280055 1167551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-1161402/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0831 22:21:20.304223 1167551 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0831 22:21:20.321621 1167551 ssh_runner.go:195] Run: openssl version
	I0831 22:21:20.326769 1167551 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0831 22:21:20.335827 1167551 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0831 22:21:20.339090 1167551 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 31 22:21 /usr/share/ca-certificates/minikubeCA.pem
	I0831 22:21:20.339157 1167551 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0831 22:21:20.345758 1167551 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
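
The b5213941.0 name is not arbitrary: OpenSSL looks CAs up in /etc/ssl/certs by subject-name hash plus a numeric suffix, and the hash is exactly what the openssl x509 -hash call two lines above produced. Derivation on the node:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # b5213941    <- subject hash; the trust link is <hash>.0
    ls -l /etc/ssl/certs/b5213941.0
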
	I0831 22:21:20.354988 1167551 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0831 22:21:20.358156 1167551 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0831 22:21:20.358210 1167551 kubeadm.go:392] StartCluster: {Name:addons-516593 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-516593 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 22:21:20.358307 1167551 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0831 22:21:20.358370 1167551 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0831 22:21:20.393230 1167551 cri.go:89] found id: ""
	I0831 22:21:20.393298 1167551 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0831 22:21:20.401737 1167551 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0831 22:21:20.410167 1167551 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0831 22:21:20.410253 1167551 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0831 22:21:20.418779 1167551 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0831 22:21:20.418797 1167551 kubeadm.go:157] found existing configuration files:
	
	I0831 22:21:20.418847 1167551 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0831 22:21:20.427683 1167551 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0831 22:21:20.427775 1167551 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0831 22:21:20.435526 1167551 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0831 22:21:20.443938 1167551 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0831 22:21:20.444000 1167551 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0831 22:21:20.452283 1167551 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0831 22:21:20.460771 1167551 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0831 22:21:20.460848 1167551 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0831 22:21:20.469324 1167551 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0831 22:21:20.477472 1167551 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0831 22:21:20.477567 1167551 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0831 22:21:20.485672 1167551 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
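
The long --ignore-preflight-errors list covers checks that are expected to fail inside a docker container: directories and manifests that minikube pre-creates, the kubelet port, swap, CPU/memory minimums, system verification, and the bridge-nf-call-iptables file. To see what preflight would report without skipping anything, the phase can be run standalone:

    sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" \
        kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml
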
	I0831 22:21:20.526239 1167551 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0831 22:21:20.526548 1167551 kubeadm.go:310] [preflight] Running pre-flight checks
	I0831 22:21:20.549350 1167551 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0831 22:21:20.549424 1167551 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1068-aws
	I0831 22:21:20.549468 1167551 kubeadm.go:310] OS: Linux
	I0831 22:21:20.549516 1167551 kubeadm.go:310] CGROUPS_CPU: enabled
	I0831 22:21:20.549574 1167551 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0831 22:21:20.549624 1167551 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0831 22:21:20.549674 1167551 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0831 22:21:20.549722 1167551 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0831 22:21:20.549775 1167551 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0831 22:21:20.549822 1167551 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0831 22:21:20.549872 1167551 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0831 22:21:20.549919 1167551 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0831 22:21:20.620507 1167551 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0831 22:21:20.620664 1167551 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0831 22:21:20.620799 1167551 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0831 22:21:20.633023 1167551 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0831 22:21:20.636651 1167551 out.go:235]   - Generating certificates and keys ...
	I0831 22:21:20.636800 1167551 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0831 22:21:20.636875 1167551 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0831 22:21:20.879653 1167551 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0831 22:21:21.020158 1167551 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0831 22:21:21.173385 1167551 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0831 22:21:21.845399 1167551 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0831 22:21:22.531931 1167551 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0831 22:21:22.532274 1167551 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-516593 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0831 22:21:22.834995 1167551 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0831 22:21:22.835386 1167551 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-516593 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0831 22:21:23.494621 1167551 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0831 22:21:24.086776 1167551 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0831 22:21:24.223773 1167551 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0831 22:21:24.224143 1167551 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0831 22:21:24.543863 1167551 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0831 22:21:25.104574 1167551 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0831 22:21:25.231630 1167551 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0831 22:21:25.639300 1167551 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0831 22:21:26.008284 1167551 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0831 22:21:26.008386 1167551 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0831 22:21:26.009841 1167551 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0831 22:21:26.012227 1167551 out.go:235]   - Booting up control plane ...
	I0831 22:21:26.012342 1167551 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0831 22:21:26.013826 1167551 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0831 22:21:26.015300 1167551 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0831 22:21:26.027691 1167551 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0831 22:21:26.034995 1167551 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0831 22:21:26.035064 1167551 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0831 22:21:26.133308 1167551 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0831 22:21:26.133424 1167551 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0831 22:21:27.134647 1167551 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001608609s
	I0831 22:21:27.134739 1167551 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0831 22:21:34.136907 1167551 kubeadm.go:310] [api-check] The API server is healthy after 7.002200145s
	I0831 22:21:34.157698 1167551 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0831 22:21:34.169547 1167551 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0831 22:21:34.192725 1167551 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0831 22:21:34.192918 1167551 kubeadm.go:310] [mark-control-plane] Marking the node addons-516593 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0831 22:21:34.205991 1167551 kubeadm.go:310] [bootstrap-token] Using token: wy6k04.dwdedxh50ub31j91
	I0831 22:21:34.208013 1167551 out.go:235]   - Configuring RBAC rules ...
	I0831 22:21:34.208146 1167551 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0831 22:21:34.212723 1167551 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0831 22:21:34.220352 1167551 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0831 22:21:34.225625 1167551 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0831 22:21:34.229472 1167551 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0831 22:21:34.233184 1167551 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0831 22:21:34.543815 1167551 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0831 22:21:34.975524 1167551 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0831 22:21:35.543121 1167551 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0831 22:21:35.544258 1167551 kubeadm.go:310] 
	I0831 22:21:35.544363 1167551 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0831 22:21:35.544377 1167551 kubeadm.go:310] 
	I0831 22:21:35.544459 1167551 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0831 22:21:35.544465 1167551 kubeadm.go:310] 
	I0831 22:21:35.544490 1167551 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0831 22:21:35.544559 1167551 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0831 22:21:35.544644 1167551 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0831 22:21:35.544657 1167551 kubeadm.go:310] 
	I0831 22:21:35.544710 1167551 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0831 22:21:35.544719 1167551 kubeadm.go:310] 
	I0831 22:21:35.544765 1167551 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0831 22:21:35.544773 1167551 kubeadm.go:310] 
	I0831 22:21:35.544824 1167551 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0831 22:21:35.544906 1167551 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0831 22:21:35.544976 1167551 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0831 22:21:35.544984 1167551 kubeadm.go:310] 
	I0831 22:21:35.545066 1167551 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0831 22:21:35.545143 1167551 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0831 22:21:35.545154 1167551 kubeadm.go:310] 
	I0831 22:21:35.545235 1167551 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token wy6k04.dwdedxh50ub31j91 \
	I0831 22:21:35.545337 1167551 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a5b4b578bcd867c96b8c659a8dd278a982ac32bd23e7e4d4ed24d4a9632d6c1f \
	I0831 22:21:35.545362 1167551 kubeadm.go:310] 	--control-plane 
	I0831 22:21:35.545370 1167551 kubeadm.go:310] 
	I0831 22:21:35.545452 1167551 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0831 22:21:35.545460 1167551 kubeadm.go:310] 
	I0831 22:21:35.545539 1167551 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token wy6k04.dwdedxh50ub31j91 \
	I0831 22:21:35.545640 1167551 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a5b4b578bcd867c96b8c659a8dd278a982ac32bd23e7e4d4ed24d4a9632d6c1f 
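
The --discovery-token-ca-cert-hash printed with the join commands is the SHA-256 of the cluster CA's public key in DER form. If the join output is lost, it can be recomputed from the CA cert (the path is minikube's cert dir from earlier in this log; the pipeline is the standard kubeadm recipe):

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
        | openssl rsa -pubin -outform der 2>/dev/null \
        | openssl dgst -sha256 -hex | sed 's/^.* //'
    # a5b4b578bcd867c96b8c659a8dd278a982ac32bd23e7e4d4ed24d4a9632d6c1f
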
	I0831 22:21:35.549894 1167551 kubeadm.go:310] W0831 22:21:20.522801    1017 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0831 22:21:35.550180 1167551 kubeadm.go:310] W0831 22:21:20.523806    1017 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0831 22:21:35.550386 1167551 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1068-aws\n", err: exit status 1
	I0831 22:21:35.550488 1167551 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
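
The first two warnings also name their fix: the generated config still uses the deprecated kubeadm.k8s.io/v1beta3 API, and kubeadm config migrate rewrites it in the newer spec (the output filename below is arbitrary):

    sudo /var/lib/minikube/binaries/v1.31.0/kubeadm config migrate \
        --old-config /var/tmp/minikube/kubeadm.yaml \
        --new-config /var/tmp/minikube/kubeadm-migrated.yaml
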
	I0831 22:21:35.550508 1167551 cni.go:84] Creating CNI manager for ""
	I0831 22:21:35.550516 1167551 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0831 22:21:35.552754 1167551 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0831 22:21:35.554655 1167551 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0831 22:21:35.558392 1167551 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.0/kubectl ...
	I0831 22:21:35.558414 1167551 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0831 22:21:35.578936 1167551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
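
The manifest applied here is minikube's kindnet CNI. Once the apply returns, the rollout can be watched with kubectl; the DaemonSet name and label below assume minikube's stock kindnet manifest (a DaemonSet named kindnet in kube-system labeled app=kindnet):

    kubectl --context addons-516593 -n kube-system rollout status ds/kindnet
    kubectl --context addons-516593 -n kube-system get pods -l app=kindnet -o wide
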
	I0831 22:21:35.847979 1167551 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0831 22:21:35.848115 1167551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 22:21:35.848204 1167551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-516593 minikube.k8s.io/updated_at=2024_08_31T22_21_35_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8ab9a20c866aaad18bea6fac47c5d146303457d2 minikube.k8s.io/name=addons-516593 minikube.k8s.io/primary=true
	I0831 22:21:36.030420 1167551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 22:21:36.030487 1167551 ops.go:34] apiserver oom_adj: -16
	I0831 22:21:36.530744 1167551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 22:21:37.033039 1167551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 22:21:37.530907 1167551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 22:21:38.031228 1167551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 22:21:38.530700 1167551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 22:21:39.031254 1167551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 22:21:39.530542 1167551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 22:21:39.620700 1167551 kubeadm.go:1113] duration metric: took 3.772632714s to wait for elevateKubeSystemPrivileges
	I0831 22:21:39.620738 1167551 kubeadm.go:394] duration metric: took 19.26253112s to StartCluster
	I0831 22:21:39.620762 1167551 settings.go:142] acquiring lock: {Name:mkccd5b6f7cf87789c72627e47240ed1100ed135 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:21:39.621470 1167551 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18943-1161402/kubeconfig
	I0831 22:21:39.621860 1167551 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-1161402/kubeconfig: {Name:mkb68eea79d6c84410a77cb04886486384945560 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:21:39.622506 1167551 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0831 22:21:39.622536 1167551 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0831 22:21:39.622793 1167551 config.go:182] Loaded profile config "addons-516593": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0831 22:21:39.622834 1167551 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
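
The toEnable map is the programmatic form of the per-profile addon switches; the same state is visible and adjustable from the CLI with the standard addons subcommands (profile name taken from this run):

    out/minikube-linux-arm64 addons list -p addons-516593      # table of enabled/disabled addons
    out/minikube-linux-arm64 addons enable dashboard -p addons-516593
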
	I0831 22:21:39.622918 1167551 addons.go:69] Setting yakd=true in profile "addons-516593"
	I0831 22:21:39.622940 1167551 addons.go:234] Setting addon yakd=true in "addons-516593"
	I0831 22:21:39.622983 1167551 host.go:66] Checking if "addons-516593" exists ...
	I0831 22:21:39.623441 1167551 cli_runner.go:164] Run: docker container inspect addons-516593 --format={{.State.Status}}
	I0831 22:21:39.624145 1167551 addons.go:69] Setting metrics-server=true in profile "addons-516593"
	I0831 22:21:39.624173 1167551 addons.go:234] Setting addon metrics-server=true in "addons-516593"
	I0831 22:21:39.624198 1167551 host.go:66] Checking if "addons-516593" exists ...
	I0831 22:21:39.624646 1167551 cli_runner.go:164] Run: docker container inspect addons-516593 --format={{.State.Status}}
	I0831 22:21:39.625158 1167551 out.go:177] * Verifying Kubernetes components...
	I0831 22:21:39.625853 1167551 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-516593"
	I0831 22:21:39.625936 1167551 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-516593"
	I0831 22:21:39.626022 1167551 host.go:66] Checking if "addons-516593" exists ...
	I0831 22:21:39.626728 1167551 addons.go:69] Setting registry=true in profile "addons-516593"
	I0831 22:21:39.626767 1167551 addons.go:234] Setting addon registry=true in "addons-516593"
	I0831 22:21:39.626795 1167551 host.go:66] Checking if "addons-516593" exists ...
	I0831 22:21:39.627331 1167551 cli_runner.go:164] Run: docker container inspect addons-516593 --format={{.State.Status}}
	I0831 22:21:39.628167 1167551 addons.go:69] Setting cloud-spanner=true in profile "addons-516593"
	I0831 22:21:39.628208 1167551 addons.go:234] Setting addon cloud-spanner=true in "addons-516593"
	I0831 22:21:39.628260 1167551 host.go:66] Checking if "addons-516593" exists ...
	I0831 22:21:39.628767 1167551 cli_runner.go:164] Run: docker container inspect addons-516593 --format={{.State.Status}}
	I0831 22:21:39.630780 1167551 addons.go:69] Setting storage-provisioner=true in profile "addons-516593"
	I0831 22:21:39.630816 1167551 addons.go:234] Setting addon storage-provisioner=true in "addons-516593"
	I0831 22:21:39.630890 1167551 host.go:66] Checking if "addons-516593" exists ...
	I0831 22:21:39.631515 1167551 cli_runner.go:164] Run: docker container inspect addons-516593 --format={{.State.Status}}
	I0831 22:21:39.636733 1167551 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-516593"
	I0831 22:21:39.636810 1167551 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-516593"
	I0831 22:21:39.636844 1167551 host.go:66] Checking if "addons-516593" exists ...
	I0831 22:21:39.637289 1167551 cli_runner.go:164] Run: docker container inspect addons-516593 --format={{.State.Status}}
	I0831 22:21:39.648915 1167551 addons.go:69] Setting default-storageclass=true in profile "addons-516593"
	I0831 22:21:39.649012 1167551 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-516593"
	I0831 22:21:39.649458 1167551 cli_runner.go:164] Run: docker container inspect addons-516593 --format={{.State.Status}}
	I0831 22:21:39.656815 1167551 addons.go:69] Setting gcp-auth=true in profile "addons-516593"
	I0831 22:21:39.656878 1167551 mustload.go:65] Loading cluster: addons-516593
	I0831 22:21:39.657076 1167551 config.go:182] Loaded profile config "addons-516593": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0831 22:21:39.657339 1167551 cli_runner.go:164] Run: docker container inspect addons-516593 --format={{.State.Status}}
	I0831 22:21:39.659897 1167551 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-516593"
	I0831 22:21:39.659933 1167551 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-516593"
	I0831 22:21:39.660316 1167551 cli_runner.go:164] Run: docker container inspect addons-516593 --format={{.State.Status}}
	I0831 22:21:39.668145 1167551 addons.go:69] Setting ingress=true in profile "addons-516593"
	I0831 22:21:39.668197 1167551 addons.go:234] Setting addon ingress=true in "addons-516593"
	I0831 22:21:39.668248 1167551 host.go:66] Checking if "addons-516593" exists ...
	I0831 22:21:39.668790 1167551 cli_runner.go:164] Run: docker container inspect addons-516593 --format={{.State.Status}}
	I0831 22:21:39.670158 1167551 addons.go:69] Setting volcano=true in profile "addons-516593"
	I0831 22:21:39.670192 1167551 addons.go:234] Setting addon volcano=true in "addons-516593"
	I0831 22:21:39.670226 1167551 host.go:66] Checking if "addons-516593" exists ...
	I0831 22:21:39.670657 1167551 cli_runner.go:164] Run: docker container inspect addons-516593 --format={{.State.Status}}
	I0831 22:21:39.688703 1167551 addons.go:69] Setting ingress-dns=true in profile "addons-516593"
	I0831 22:21:39.688756 1167551 addons.go:234] Setting addon ingress-dns=true in "addons-516593"
	I0831 22:21:39.688804 1167551 host.go:66] Checking if "addons-516593" exists ...
	I0831 22:21:39.689263 1167551 cli_runner.go:164] Run: docker container inspect addons-516593 --format={{.State.Status}}
	I0831 22:21:39.689422 1167551 addons.go:69] Setting volumesnapshots=true in profile "addons-516593"
	I0831 22:21:39.689440 1167551 addons.go:234] Setting addon volumesnapshots=true in "addons-516593"
	I0831 22:21:39.689463 1167551 host.go:66] Checking if "addons-516593" exists ...
	I0831 22:21:39.689833 1167551 cli_runner.go:164] Run: docker container inspect addons-516593 --format={{.State.Status}}
	I0831 22:21:39.705956 1167551 addons.go:69] Setting inspektor-gadget=true in profile "addons-516593"
	I0831 22:21:39.706057 1167551 addons.go:234] Setting addon inspektor-gadget=true in "addons-516593"
	I0831 22:21:39.706132 1167551 host.go:66] Checking if "addons-516593" exists ...
	I0831 22:21:39.706829 1167551 cli_runner.go:164] Run: docker container inspect addons-516593 --format={{.State.Status}}
	I0831 22:21:39.707646 1167551 cli_runner.go:164] Run: docker container inspect addons-516593 --format={{.State.Status}}
	I0831 22:21:39.748235 1167551 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0831 22:21:39.810567 1167551 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0831 22:21:39.817624 1167551 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0831 22:21:39.817651 1167551 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0831 22:21:39.817721 1167551 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-516593
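The docker container inspect invocations that recur through this stretch of the log all answer the same question: which host port Docker mapped to the container's SSH port (22/tcp). Pulled out of the log and run by hand, the query and its result would look roughly like the following; the sample output is inferred from the Port:34249 that every sshutil client dials later in this log, not captured separately:

    docker container inspect \
      -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
      addons-516593
    34249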
	I0831 22:21:39.828785 1167551 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0831 22:21:39.829432 1167551 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0831 22:21:39.831100 1167551 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0831 22:21:39.831125 1167551 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0831 22:21:39.831200 1167551 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-516593
	I0831 22:21:39.834698 1167551 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0831 22:21:39.834721 1167551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0831 22:21:39.834782 1167551 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-516593
	I0831 22:21:39.845428 1167551 out.go:177]   - Using image docker.io/registry:2.8.3
	I0831 22:21:39.847796 1167551 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0831 22:21:39.852792 1167551 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0831 22:21:39.852820 1167551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0831 22:21:39.852890 1167551 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-516593
	I0831 22:21:39.880082 1167551 addons.go:234] Setting addon default-storageclass=true in "addons-516593"
	I0831 22:21:39.880126 1167551 host.go:66] Checking if "addons-516593" exists ...
	I0831 22:21:39.880562 1167551 cli_runner.go:164] Run: docker container inspect addons-516593 --format={{.State.Status}}
	I0831 22:21:39.884728 1167551 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0831 22:21:39.892830 1167551 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0831 22:21:39.892854 1167551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0831 22:21:39.892921 1167551 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-516593
	I0831 22:21:39.899442 1167551 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0831 22:21:39.901584 1167551 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0831 22:21:39.901611 1167551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0831 22:21:39.901692 1167551 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-516593
	I0831 22:21:39.918035 1167551 host.go:66] Checking if "addons-516593" exists ...
	I0831 22:21:39.920290 1167551 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-516593"
	I0831 22:21:39.920324 1167551 host.go:66] Checking if "addons-516593" exists ...
	I0831 22:21:39.924956 1167551 cli_runner.go:164] Run: docker container inspect addons-516593 --format={{.State.Status}}
	I0831 22:21:39.937656 1167551 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0831 22:21:39.969404 1167551 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0831 22:21:39.972758 1167551 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0831 22:21:39.974695 1167551 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0831 22:21:39.974717 1167551 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0831 22:21:39.974787 1167551 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-516593
	I0831 22:21:39.974958 1167551 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0831 22:21:39.978096 1167551 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0831 22:21:39.988379 1167551 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0831 22:21:39.988978 1167551 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34249 SSHKeyPath:/home/jenkins/minikube-integration/18943-1161402/.minikube/machines/addons-516593/id_rsa Username:docker}
	I0831 22:21:39.989629 1167551 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34249 SSHKeyPath:/home/jenkins/minikube-integration/18943-1161402/.minikube/machines/addons-516593/id_rsa Username:docker}
	I0831 22:21:40.002623 1167551 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0831 22:21:40.024765 1167551 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0831 22:21:40.024826 1167551 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0
	I0831 22:21:40.033251 1167551 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0831 22:21:40.036834 1167551 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0831 22:21:40.036866 1167551 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0831 22:21:40.036959 1167551 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-516593
	I0831 22:21:40.040691 1167551 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0831 22:21:40.043286 1167551 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0831 22:21:40.043422 1167551 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0831 22:21:40.043562 1167551 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0831 22:21:40.043577 1167551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0831 22:21:40.043650 1167551 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-516593
	I0831 22:21:40.069973 1167551 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0831 22:21:40.070005 1167551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0831 22:21:40.070083 1167551 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-516593
	I0831 22:21:40.098508 1167551 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34249 SSHKeyPath:/home/jenkins/minikube-integration/18943-1161402/.minikube/machines/addons-516593/id_rsa Username:docker}
	I0831 22:21:40.108069 1167551 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0831 22:21:40.108371 1167551 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34249 SSHKeyPath:/home/jenkins/minikube-integration/18943-1161402/.minikube/machines/addons-516593/id_rsa Username:docker}
	I0831 22:21:40.112500 1167551 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0831 22:21:40.115099 1167551 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34249 SSHKeyPath:/home/jenkins/minikube-integration/18943-1161402/.minikube/machines/addons-516593/id_rsa Username:docker}
	I0831 22:21:40.115900 1167551 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0831 22:21:40.115918 1167551 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0831 22:21:40.115974 1167551 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-516593
	I0831 22:21:40.116702 1167551 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34249 SSHKeyPath:/home/jenkins/minikube-integration/18943-1161402/.minikube/machines/addons-516593/id_rsa Username:docker}
	I0831 22:21:40.117381 1167551 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0831 22:21:40.117396 1167551 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0831 22:21:40.117465 1167551 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-516593
	I0831 22:21:40.129023 1167551 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0831 22:21:40.132580 1167551 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0831 22:21:40.133400 1167551 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34249 SSHKeyPath:/home/jenkins/minikube-integration/18943-1161402/.minikube/machines/addons-516593/id_rsa Username:docker}
	I0831 22:21:40.140802 1167551 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0831 22:21:40.143526 1167551 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0831 22:21:40.143561 1167551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0831 22:21:40.143630 1167551 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-516593
	I0831 22:21:40.155394 1167551 out.go:177]   - Using image docker.io/busybox:stable
	I0831 22:21:40.159106 1167551 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0831 22:21:40.165019 1167551 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0831 22:21:40.165041 1167551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0831 22:21:40.165112 1167551 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-516593
	I0831 22:21:40.165374 1167551 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0831 22:21:40.221722 1167551 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34249 SSHKeyPath:/home/jenkins/minikube-integration/18943-1161402/.minikube/machines/addons-516593/id_rsa Username:docker}
	I0831 22:21:40.236007 1167551 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34249 SSHKeyPath:/home/jenkins/minikube-integration/18943-1161402/.minikube/machines/addons-516593/id_rsa Username:docker}
	I0831 22:21:40.236910 1167551 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34249 SSHKeyPath:/home/jenkins/minikube-integration/18943-1161402/.minikube/machines/addons-516593/id_rsa Username:docker}
	I0831 22:21:40.267020 1167551 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34249 SSHKeyPath:/home/jenkins/minikube-integration/18943-1161402/.minikube/machines/addons-516593/id_rsa Username:docker}
	I0831 22:21:40.270203 1167551 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34249 SSHKeyPath:/home/jenkins/minikube-integration/18943-1161402/.minikube/machines/addons-516593/id_rsa Username:docker}
	W0831 22:21:40.270655 1167551 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0831 22:21:40.270676 1167551 retry.go:31] will retry after 132.073652ms: ssh: handshake failed: EOF
	I0831 22:21:40.272756 1167551 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34249 SSHKeyPath:/home/jenkins/minikube-integration/18943-1161402/.minikube/machines/addons-516593/id_rsa Username:docker}
	I0831 22:21:40.279004 1167551 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34249 SSHKeyPath:/home/jenkins/minikube-integration/18943-1161402/.minikube/machines/addons-516593/id_rsa Username:docker}
	I0831 22:21:40.808973 1167551 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0831 22:21:40.809065 1167551 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0831 22:21:40.814786 1167551 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0831 22:21:40.814805 1167551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0831 22:21:40.912420 1167551 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0831 22:21:40.912497 1167551 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0831 22:21:40.981731 1167551 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0831 22:21:40.981805 1167551 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0831 22:21:41.067625 1167551 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0831 22:21:41.067698 1167551 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0831 22:21:41.077103 1167551 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0831 22:21:41.077181 1167551 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0831 22:21:41.093108 1167551 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0831 22:21:41.093186 1167551 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0831 22:21:41.102701 1167551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0831 22:21:41.131219 1167551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0831 22:21:41.160356 1167551 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0831 22:21:41.160435 1167551 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0831 22:21:41.168608 1167551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0831 22:21:41.170942 1167551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0831 22:21:41.216080 1167551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0831 22:21:41.219297 1167551 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0831 22:21:41.219358 1167551 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0831 22:21:41.227724 1167551 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0831 22:21:41.227804 1167551 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0831 22:21:41.233777 1167551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0831 22:21:41.253198 1167551 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0831 22:21:41.253278 1167551 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0831 22:21:41.280048 1167551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0831 22:21:41.286776 1167551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0831 22:21:41.335321 1167551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0831 22:21:41.384652 1167551 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0831 22:21:41.384724 1167551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0831 22:21:41.472591 1167551 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0831 22:21:41.472727 1167551 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0831 22:21:41.601902 1167551 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0831 22:21:41.601985 1167551 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0831 22:21:41.660879 1167551 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0831 22:21:41.660965 1167551 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0831 22:21:41.664603 1167551 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0831 22:21:41.664746 1167551 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0831 22:21:41.840232 1167551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0831 22:21:41.848982 1167551 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0831 22:21:41.849064 1167551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0831 22:21:41.920448 1167551 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0831 22:21:41.920530 1167551 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0831 22:21:41.975261 1167551 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0831 22:21:41.975345 1167551 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0831 22:21:42.043074 1167551 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0831 22:21:42.043154 1167551 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0831 22:21:42.360601 1167551 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0831 22:21:42.360761 1167551 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0831 22:21:42.401570 1167551 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.463875175s)
	I0831 22:21:42.401652 1167551 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
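The sed pipeline completed above rewrites the coredns ConfigMap in place: it inserts a hosts plugin block ahead of the Corefile's forward directive, and a log directive ahead of errors. Reconstructed directly from the sed expressions, the injected fragment is:

        hosts {
           192.168.49.1 host.minikube.internal
           fallthrough
        }

With that block in place, in-cluster lookups of host.minikube.internal resolve to 192.168.49.1, the host address recorded in the log line above; everything else falls through to the existing forward directive.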
	I0831 22:21:42.401832 1167551 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.236440633s)
	I0831 22:21:42.403497 1167551 node_ready.go:35] waiting up to 6m0s for node "addons-516593" to be "Ready" ...
	I0831 22:21:42.409748 1167551 node_ready.go:49] node "addons-516593" has status "Ready":"True"
	I0831 22:21:42.409821 1167551 node_ready.go:38] duration metric: took 6.039536ms for node "addons-516593" to be "Ready" ...
	I0831 22:21:42.409835 1167551 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0831 22:21:42.440180 1167551 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-8d6s7" in "kube-system" namespace to be "Ready" ...
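The pod_ready polling that produces the recurring has status "Ready":"False" lines below amounts to repeatedly reading the pod's Ready condition. As a one-off check against the same cluster (assuming the kubectl context this run uses), the condition can be read with:

    kubectl --context addons-516593 -n kube-system get pod coredns-6f6b679f8f-8d6s7 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'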
	I0831 22:21:42.440793 1167551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0831 22:21:42.491704 1167551 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0831 22:21:42.491726 1167551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0831 22:21:42.681082 1167551 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0831 22:21:42.681112 1167551 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0831 22:21:42.833909 1167551 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0831 22:21:42.833940 1167551 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0831 22:21:42.907857 1167551 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-516593" context rescaled to 1 replicas
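kapi.go performs this rescale through the Go client API; expressed as the equivalent CLI call (an equivalence assumed here, not shown in the log), it would be:

    kubectl --context addons-516593 -n kube-system scale deployment coredns --replicas=1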
	I0831 22:21:43.103185 1167551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0831 22:21:43.229533 1167551 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0831 22:21:43.229562 1167551 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0831 22:21:43.336492 1167551 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0831 22:21:43.336520 1167551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0831 22:21:43.619536 1167551 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0831 22:21:43.619562 1167551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0831 22:21:43.661475 1167551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0831 22:21:43.923649 1167551 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0831 22:21:43.923678 1167551 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0831 22:21:44.067299 1167551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.964522781s)
	I0831 22:21:44.067364 1167551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.936084816s)
	I0831 22:21:44.129958 1167551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (2.961231518s)
	I0831 22:21:44.402210 1167551 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0831 22:21:44.402235 1167551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0831 22:21:44.512361 1167551 pod_ready.go:103] pod "coredns-6f6b679f8f-8d6s7" in "kube-system" namespace has status "Ready":"False"
	I0831 22:21:44.794521 1167551 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0831 22:21:44.794545 1167551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0831 22:21:45.125141 1167551 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0831 22:21:45.125175 1167551 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0831 22:21:45.377130 1167551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.206098044s)
	I0831 22:21:45.377189 1167551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.161039592s)
	I0831 22:21:45.585908 1167551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0831 22:21:46.954802 1167551 pod_ready.go:103] pod "coredns-6f6b679f8f-8d6s7" in "kube-system" namespace has status "Ready":"False"
	I0831 22:21:47.194999 1167551 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0831 22:21:47.195157 1167551 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-516593
	I0831 22:21:47.222495 1167551 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34249 SSHKeyPath:/home/jenkins/minikube-integration/18943-1161402/.minikube/machines/addons-516593/id_rsa Username:docker}
	I0831 22:21:47.747126 1167551 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0831 22:21:47.825206 1167551 addons.go:234] Setting addon gcp-auth=true in "addons-516593"
	I0831 22:21:47.825302 1167551 host.go:66] Checking if "addons-516593" exists ...
	I0831 22:21:47.825798 1167551 cli_runner.go:164] Run: docker container inspect addons-516593 --format={{.State.Status}}
	I0831 22:21:47.849303 1167551 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0831 22:21:47.849366 1167551 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-516593
	I0831 22:21:47.870754 1167551 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34249 SSHKeyPath:/home/jenkins/minikube-integration/18943-1161402/.minikube/machines/addons-516593/id_rsa Username:docker}
	I0831 22:21:48.992862 1167551 pod_ready.go:103] pod "coredns-6f6b679f8f-8d6s7" in "kube-system" namespace has status "Ready":"False"
	I0831 22:21:50.105147 1167551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (8.871284846s)
	I0831 22:21:50.105374 1167551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.825246286s)
	I0831 22:21:50.105409 1167551 addons.go:475] Verifying addon ingress=true in "addons-516593"
	I0831 22:21:50.105766 1167551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.818826378s)
	I0831 22:21:50.105928 1167551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.770522923s)
	I0831 22:21:50.105961 1167551 addons.go:475] Verifying addon metrics-server=true in "addons-516593"
	I0831 22:21:50.106099 1167551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.265786518s)
	I0831 22:21:50.106139 1167551 addons.go:475] Verifying addon registry=true in "addons-516593"
	I0831 22:21:50.106464 1167551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.66561998s)
	I0831 22:21:50.106647 1167551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.44513375s)
	I0831 22:21:50.106684 1167551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.003356139s)
	W0831 22:21:50.107745 1167551 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0831 22:21:50.107763 1167551 retry.go:31] will retry after 363.225947ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
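The failure above is a CRD ordering race: the single kubectl apply submits the VolumeSnapshotClass object in the same batch that creates the snapshot.storage.k8s.io CRDs, and the API server has not yet published the new kind in discovery, hence "resource mapping not found ... ensure CRDs are installed first". The log shows minikube's own remedy: the retry at 22:21:50.471363 re-runs the apply with --force and completes about 1.5 s later (see the Completed line at 22:21:51.993301). A conventional way to avoid the race entirely, sketched here and not what minikube does, is to apply the CRDs first, wait for them to reach the Established condition, then apply the objects that depend on them:

    kubectl apply -f snapshot.storage.k8s.io_volumesnapshotclasses.yaml
    kubectl wait --for condition=established --timeout=60s \
      crd/volumesnapshotclasses.snapshot.storage.k8s.io
    kubectl apply -f csi-hostpath-snapshotclass.yaml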
	I0831 22:21:50.108034 1167551 out.go:177] * Verifying ingress addon...
	I0831 22:21:50.111410 1167551 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-516593 service yakd-dashboard -n yakd-dashboard
	
	I0831 22:21:50.111524 1167551 out.go:177] * Verifying registry addon...
	I0831 22:21:50.114016 1167551 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0831 22:21:50.115242 1167551 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0831 22:21:50.151556 1167551 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0831 22:21:50.151634 1167551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:21:50.152705 1167551 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0831 22:21:50.152773 1167551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:21:50.471363 1167551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0831 22:21:50.623979 1167551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:21:50.624390 1167551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:21:51.099590 1167551 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.25025072s)
	I0831 22:21:51.099828 1167551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.513872517s)
	I0831 22:21:51.099871 1167551 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-516593"
	I0831 22:21:51.101643 1167551 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0831 22:21:51.101679 1167551 out.go:177] * Verifying csi-hostpath-driver addon...
	I0831 22:21:51.104007 1167551 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0831 22:21:51.104790 1167551 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0831 22:21:51.106242 1167551 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0831 22:21:51.106313 1167551 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0831 22:21:51.128042 1167551 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0831 22:21:51.128120 1167551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:21:51.134508 1167551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:21:51.135581 1167551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:21:51.217028 1167551 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0831 22:21:51.217105 1167551 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0831 22:21:51.280863 1167551 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0831 22:21:51.280934 1167551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0831 22:21:51.340762 1167551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0831 22:21:51.446722 1167551 pod_ready.go:103] pod "coredns-6f6b679f8f-8d6s7" in "kube-system" namespace has status "Ready":"False"
	I0831 22:21:51.612703 1167551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:21:51.711535 1167551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:21:51.711731 1167551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:21:51.993301 1167551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.521842277s)
	I0831 22:21:52.110391 1167551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:21:52.119701 1167551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:21:52.120974 1167551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:21:52.324664 1167551 addons.go:475] Verifying addon gcp-auth=true in "addons-516593"
	I0831 22:21:52.328604 1167551 out.go:177] * Verifying gcp-auth addon...
	I0831 22:21:52.331521 1167551 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0831 22:21:52.346230 1167551 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0831 22:21:52.610517 1167551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:21:52.620513 1167551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:21:52.621979 1167551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:21:53.109668 1167551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:21:53.119453 1167551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:21:53.120541 1167551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:21:53.451423 1167551 pod_ready.go:103] pod "coredns-6f6b679f8f-8d6s7" in "kube-system" namespace has status "Ready":"False"
	I0831 22:21:53.611240 1167551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:21:53.620777 1167551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:21:53.622707 1167551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:21:54.110433 1167551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:21:54.119519 1167551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:21:54.120998 1167551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:21:54.610229 1167551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:21:54.619923 1167551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:21:54.620995 1167551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:21:55.111410 1167551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:21:55.135965 1167551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:21:55.136795 1167551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:21:55.453602 1167551 pod_ready.go:103] pod "coredns-6f6b679f8f-8d6s7" in "kube-system" namespace has status "Ready":"False"
	I0831 22:21:55.610228 1167551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:21:55.617911 1167551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:21:55.619970 1167551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:21:56.109886 1167551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:21:56.118199 1167551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:21:56.119650 1167551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:21:56.610572 1167551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:21:56.620164 1167551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:21:56.621895 1167551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:21:57.109927 1167551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:21:57.119720 1167551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:21:57.120272 1167551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:21:57.609879 1167551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:21:57.619256 1167551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:21:57.620136 1167551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:21:57.949972 1167551 pod_ready.go:103] pod "coredns-6f6b679f8f-8d6s7" in "kube-system" namespace has status "Ready":"False"
	I0831 22:21:58.114530 1167551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:21:58.118824 1167551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:21:58.121980 1167551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:21:58.610504 1167551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:21:58.618412 1167551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:21:58.620077 1167551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:21:59.110511 1167551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:21:59.119750 1167551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:21:59.123788 1167551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:21:59.610724 1167551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:21:59.620139 1167551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:21:59.709912 1167551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:22:00.141679 1167551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:22:00.160358 1167551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:22:00.162586 1167551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:22:00.452178 1167551 pod_ready.go:103] pod "coredns-6f6b679f8f-8d6s7" in "kube-system" namespace has status "Ready":"False"
	I0831 22:22:00.609869 1167551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:22:00.619278 1167551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:22:00.620703 1167551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:22:01.110771 1167551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:22:01.120173 1167551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:22:01.121452 1167551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:22:01.610413 1167551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:22:01.618526 1167551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	[... 40 kapi.go:96 poll lines elided: pods "kubernetes.io/minikube-addons=registry", "kubernetes.io/minikube-addons=csi-hostpath-driver", and "app.kubernetes.io/name=ingress-nginx" stayed Pending, polled roughly every 500ms from 22:22:01 to 22:22:08; the interleaved coredns readiness checks are kept below ...]
	I0831 22:22:02.947339 1167551 pod_ready.go:103] pod "coredns-6f6b679f8f-8d6s7" in "kube-system" namespace has status "Ready":"False"
	I0831 22:22:04.958873 1167551 pod_ready.go:103] pod "coredns-6f6b679f8f-8d6s7" in "kube-system" namespace has status "Ready":"False"
	I0831 22:22:07.453738 1167551 pod_ready.go:103] pod "coredns-6f6b679f8f-8d6s7" in "kube-system" namespace has status "Ready":"False"
	I0831 22:22:08.458523 1167551 pod_ready.go:93] pod "coredns-6f6b679f8f-8d6s7" in "kube-system" namespace has status "Ready":"True"
	I0831 22:22:08.458551 1167551 pod_ready.go:82] duration metric: took 26.018297034s for pod "coredns-6f6b679f8f-8d6s7" in "kube-system" namespace to be "Ready" ...
	I0831 22:22:08.458563 1167551 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-g88q9" in "kube-system" namespace to be "Ready" ...
	I0831 22:22:08.461582 1167551 pod_ready.go:98] error getting pod "coredns-6f6b679f8f-g88q9" in "kube-system" namespace (skipping!): pods "coredns-6f6b679f8f-g88q9" not found
	I0831 22:22:08.461612 1167551 pod_ready.go:82] duration metric: took 3.040925ms for pod "coredns-6f6b679f8f-g88q9" in "kube-system" namespace to be "Ready" ...
	E0831 22:22:08.461624 1167551 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-6f6b679f8f-g88q9" in "kube-system" namespace (skipping!): pods "coredns-6f6b679f8f-g88q9" not found
	I0831 22:22:08.461632 1167551 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-516593" in "kube-system" namespace to be "Ready" ...
	I0831 22:22:08.467354 1167551 pod_ready.go:93] pod "etcd-addons-516593" in "kube-system" namespace has status "Ready":"True"
	I0831 22:22:08.467381 1167551 pod_ready.go:82] duration metric: took 5.742429ms for pod "etcd-addons-516593" in "kube-system" namespace to be "Ready" ...
	I0831 22:22:08.467395 1167551 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-516593" in "kube-system" namespace to be "Ready" ...
	I0831 22:22:08.473486 1167551 pod_ready.go:93] pod "kube-apiserver-addons-516593" in "kube-system" namespace has status "Ready":"True"
	I0831 22:22:08.473512 1167551 pod_ready.go:82] duration metric: took 6.108951ms for pod "kube-apiserver-addons-516593" in "kube-system" namespace to be "Ready" ...
	I0831 22:22:08.473524 1167551 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-516593" in "kube-system" namespace to be "Ready" ...
	I0831 22:22:08.488032 1167551 pod_ready.go:93] pod "kube-controller-manager-addons-516593" in "kube-system" namespace has status "Ready":"True"
	I0831 22:22:08.488059 1167551 pod_ready.go:82] duration metric: took 14.527285ms for pod "kube-controller-manager-addons-516593" in "kube-system" namespace to be "Ready" ...
	I0831 22:22:08.488072 1167551 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-8tqf5" in "kube-system" namespace to be "Ready" ...
	I0831 22:22:08.613975 1167551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:22:08.620351 1167551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:22:08.621308 1167551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:22:08.645587 1167551 pod_ready.go:93] pod "kube-proxy-8tqf5" in "kube-system" namespace has status "Ready":"True"
	I0831 22:22:08.645616 1167551 pod_ready.go:82] duration metric: took 157.535887ms for pod "kube-proxy-8tqf5" in "kube-system" namespace to be "Ready" ...
	I0831 22:22:08.645629 1167551 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-516593" in "kube-system" namespace to be "Ready" ...
	I0831 22:22:09.045110 1167551 pod_ready.go:93] pod "kube-scheduler-addons-516593" in "kube-system" namespace has status "Ready":"True"
	I0831 22:22:09.045137 1167551 pod_ready.go:82] duration metric: took 399.500256ms for pod "kube-scheduler-addons-516593" in "kube-system" namespace to be "Ready" ...
	I0831 22:22:09.045147 1167551 pod_ready.go:39] duration metric: took 26.635290351s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
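
The pod_ready.go lines above all reduce to one predicate: a pod counts as "Ready" once its PodReady condition reports True. A minimal sketch of that predicate, assuming client-go's core/v1 types; this illustrates the check, it is not minikube's actual pod_ready.go code:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// isPodReady mirrors the predicate behind the pod_ready.go log lines:
// a pod is "Ready" when its PodReady condition has status True.
func isPodReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// A synthetic pod object, for illustration only.
	pod := &corev1.Pod{Status: corev1.PodStatus{Conditions: []corev1.PodCondition{
		{Type: corev1.PodReady, Status: corev1.ConditionTrue},
	}}}
	fmt.Println("Ready:", isPodReady(pod)) // prints: Ready: true
}
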
	I0831 22:22:09.045183 1167551 api_server.go:52] waiting for apiserver process to appear ...
	I0831 22:22:09.045263 1167551 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0831 22:22:09.061057 1167551 api_server.go:72] duration metric: took 29.438486169s to wait for apiserver process to appear ...
	I0831 22:22:09.061124 1167551 api_server.go:88] waiting for apiserver healthz status ...
	I0831 22:22:09.061159 1167551 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 22:22:09.068776 1167551 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0831 22:22:09.069884 1167551 api_server.go:141] control plane version: v1.31.0
	I0831 22:22:09.069912 1167551 api_server.go:131] duration metric: took 8.76751ms to wait for apiserver health ...
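
The healthz probe logged above is a plain HTTPS GET that expects status 200 and the literal body "ok". A minimal sketch under two assumptions: the apiserver allows anonymous access to /healthz (the Kubernetes default), and certificate verification is skipped because the cluster CA is self-signed; a production client would trust the cluster CA instead. The endpoint URL is taken from the log:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Skip verification because the cluster CA is not in the system
		// trust store; a real client would pin the cluster CA instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.49.2:8443/healthz")
	if err != nil {
		fmt.Println("healthz unreachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// A healthy apiserver answers 200 with body "ok", matching the
	// "returned 200: ok" lines in the log above.
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
}
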
	I0831 22:22:09.069921 1167551 system_pods.go:43] waiting for kube-system pods to appear ...
	I0831 22:22:09.114100 1167551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:22:09.119033 1167551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:22:09.120810 1167551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:22:09.254834 1167551 system_pods.go:59] 18 kube-system pods found
	I0831 22:22:09.254876 1167551 system_pods.go:61] "coredns-6f6b679f8f-8d6s7" [96b6ae8a-1a43-48e7-b657-b82e45942d1e] Running
	I0831 22:22:09.254910 1167551 system_pods.go:61] "csi-hostpath-attacher-0" [2398f40c-fab2-4dab-9f9a-18549c00556e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0831 22:22:09.254927 1167551 system_pods.go:61] "csi-hostpath-resizer-0" [c9703003-1833-4f16-8ed6-2ac521f8f412] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0831 22:22:09.254936 1167551 system_pods.go:61] "csi-hostpathplugin-kqw2s" [b43db5f3-3100-456a-9c44-0eb60a3924c2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0831 22:22:09.254947 1167551 system_pods.go:61] "etcd-addons-516593" [dc705c87-6670-4ffa-abb6-2bc7c61e31f8] Running
	I0831 22:22:09.254952 1167551 system_pods.go:61] "kindnet-qbd29" [384d3c8e-e06d-4422-827f-4d99c0c87f40] Running
	I0831 22:22:09.254960 1167551 system_pods.go:61] "kube-apiserver-addons-516593" [8d525cbf-1a1c-407d-a066-8f96d54e65ec] Running
	I0831 22:22:09.254965 1167551 system_pods.go:61] "kube-controller-manager-addons-516593" [a8f45426-fb2f-4033-8ccf-68ec169f06ce] Running
	I0831 22:22:09.254972 1167551 system_pods.go:61] "kube-ingress-dns-minikube" [ce27c9e8-d608-4e27-b658-25b6be9387a9] Running
	I0831 22:22:09.254996 1167551 system_pods.go:61] "kube-proxy-8tqf5" [c1866d47-ae1b-45a9-836b-d19852a82870] Running
	I0831 22:22:09.255001 1167551 system_pods.go:61] "kube-scheduler-addons-516593" [95866f50-759d-4cfb-8122-48a830cf0097] Running
	I0831 22:22:09.255005 1167551 system_pods.go:61] "metrics-server-84c5f94fbc-kmkrh" [04a55105-b90d-4b97-af79-0063fbcb110c] Running
	I0831 22:22:09.255018 1167551 system_pods.go:61] "nvidia-device-plugin-daemonset-bb285" [df44e7cc-7587-4732-a752-a37f7c187b90] Running
	I0831 22:22:09.255024 1167551 system_pods.go:61] "registry-6fb4cdfc84-wvmsl" [040a9b4e-0596-41c6-9739-2a1d51dfac80] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0831 22:22:09.255030 1167551 system_pods.go:61] "registry-proxy-z5ckz" [5f1deaf5-fb65-4500-bb12-b3e76411722b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0831 22:22:09.255052 1167551 system_pods.go:61] "snapshot-controller-56fcc65765-gv9mw" [c7b53e79-f922-4d17-8b35-83fbf0fed73d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0831 22:22:09.255069 1167551 system_pods.go:61] "snapshot-controller-56fcc65765-tmmfs" [0f63d262-c796-4952-95db-e8269c3e96ba] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0831 22:22:09.255074 1167551 system_pods.go:61] "storage-provisioner" [ea02dd28-a425-4ca7-8c85-f0b0f777217d] Running
	I0831 22:22:09.255089 1167551 system_pods.go:74] duration metric: took 185.154138ms to wait for pod list to return data ...
	I0831 22:22:09.255107 1167551 default_sa.go:34] waiting for default service account to be created ...
	I0831 22:22:09.516782 1167551 default_sa.go:45] found service account: "default"
	I0831 22:22:09.516812 1167551 default_sa.go:55] duration metric: took 261.693367ms for default service account to be created ...
	I0831 22:22:09.516823 1167551 system_pods.go:116] waiting for k8s-apps to be running ...
	I0831 22:22:09.630485 1167551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:22:09.634397 1167551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:22:09.635559 1167551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:22:09.652424 1167551 system_pods.go:86] 18 kube-system pods found
	I0831 22:22:09.652464 1167551 system_pods.go:89] "coredns-6f6b679f8f-8d6s7" [96b6ae8a-1a43-48e7-b657-b82e45942d1e] Running
	I0831 22:22:09.652475 1167551 system_pods.go:89] "csi-hostpath-attacher-0" [2398f40c-fab2-4dab-9f9a-18549c00556e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0831 22:22:09.652482 1167551 system_pods.go:89] "csi-hostpath-resizer-0" [c9703003-1833-4f16-8ed6-2ac521f8f412] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0831 22:22:09.652490 1167551 system_pods.go:89] "csi-hostpathplugin-kqw2s" [b43db5f3-3100-456a-9c44-0eb60a3924c2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0831 22:22:09.652494 1167551 system_pods.go:89] "etcd-addons-516593" [dc705c87-6670-4ffa-abb6-2bc7c61e31f8] Running
	I0831 22:22:09.652499 1167551 system_pods.go:89] "kindnet-qbd29" [384d3c8e-e06d-4422-827f-4d99c0c87f40] Running
	I0831 22:22:09.652503 1167551 system_pods.go:89] "kube-apiserver-addons-516593" [8d525cbf-1a1c-407d-a066-8f96d54e65ec] Running
	I0831 22:22:09.652507 1167551 system_pods.go:89] "kube-controller-manager-addons-516593" [a8f45426-fb2f-4033-8ccf-68ec169f06ce] Running
	I0831 22:22:09.652512 1167551 system_pods.go:89] "kube-ingress-dns-minikube" [ce27c9e8-d608-4e27-b658-25b6be9387a9] Running
	I0831 22:22:09.652520 1167551 system_pods.go:89] "kube-proxy-8tqf5" [c1866d47-ae1b-45a9-836b-d19852a82870] Running
	I0831 22:22:09.652525 1167551 system_pods.go:89] "kube-scheduler-addons-516593" [95866f50-759d-4cfb-8122-48a830cf0097] Running
	I0831 22:22:09.652536 1167551 system_pods.go:89] "metrics-server-84c5f94fbc-kmkrh" [04a55105-b90d-4b97-af79-0063fbcb110c] Running
	I0831 22:22:09.652541 1167551 system_pods.go:89] "nvidia-device-plugin-daemonset-bb285" [df44e7cc-7587-4732-a752-a37f7c187b90] Running
	I0831 22:22:09.652558 1167551 system_pods.go:89] "registry-6fb4cdfc84-wvmsl" [040a9b4e-0596-41c6-9739-2a1d51dfac80] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0831 22:22:09.652564 1167551 system_pods.go:89] "registry-proxy-z5ckz" [5f1deaf5-fb65-4500-bb12-b3e76411722b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0831 22:22:09.652576 1167551 system_pods.go:89] "snapshot-controller-56fcc65765-gv9mw" [c7b53e79-f922-4d17-8b35-83fbf0fed73d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0831 22:22:09.652584 1167551 system_pods.go:89] "snapshot-controller-56fcc65765-tmmfs" [0f63d262-c796-4952-95db-e8269c3e96ba] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0831 22:22:09.652592 1167551 system_pods.go:89] "storage-provisioner" [ea02dd28-a425-4ca7-8c85-f0b0f777217d] Running
	I0831 22:22:09.652600 1167551 system_pods.go:126] duration metric: took 135.770629ms to wait for k8s-apps to be running ...
	I0831 22:22:09.652612 1167551 system_svc.go:44] waiting for kubelet service to be running ....
	I0831 22:22:09.652739 1167551 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0831 22:22:09.685911 1167551 system_svc.go:56] duration metric: took 33.288779ms WaitForService to wait for kubelet
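
The kubelet service check above shells out to systemctl and relies on the exit status alone (--quiet suppresses all output). A sketch that runs the same command locally via os/exec; in the logged run the command goes through minikube's SSH runner, which this illustration does not reproduce:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// The exact command from the ssh_runner.go line above; a zero exit
	// status means the unit is active.
	cmd := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet")
	if err := cmd.Run(); err != nil {
		fmt.Println("kubelet is not active:", err)
		return
	}
	fmt.Println("kubelet is active")
}
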
	I0831 22:22:09.685943 1167551 kubeadm.go:582] duration metric: took 30.063378625s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0831 22:22:09.685962 1167551 node_conditions.go:102] verifying NodePressure condition ...
	I0831 22:22:09.847603 1167551 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0831 22:22:09.847639 1167551 node_conditions.go:123] node cpu capacity is 2
	I0831 22:22:09.847653 1167551 node_conditions.go:105] duration metric: took 161.685051ms to run NodePressure ...
	I0831 22:22:09.847668 1167551 start.go:241] waiting for startup goroutines ...
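
The NodePressure step reads node capacity from the API, which is where the "203034800Ki" and "cpu capacity is 2" figures above come from. A hedged client-go sketch that lists nodes and prints those two capacity fields; the kubeconfig path is a placeholder assumption:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path, assumed for illustration.
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	nodes, err := client.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		// For the node in this run these would print "2" and "203034800Ki".
		fmt.Printf("node %s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
	}
}
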
	[... 59 kapi.go:96 poll lines elided: the registry, csi-hostpath-driver, and ingress-nginx selectors stayed Pending, polled roughly every 500ms from 22:22:10 to 22:22:19 ...]
	I0831 22:22:19.620032 1167551 kapi.go:107] duration metric: took 29.504787889s to wait for kubernetes.io/minikube-addons=registry ...
	[... 92 kapi.go:96 poll lines elided: "kubernetes.io/minikube-addons=csi-hostpath-driver" and "app.kubernetes.io/name=ingress-nginx" stayed Pending, polled roughly every 500ms from 22:22:20 to 22:22:42 ...]
	I0831 22:22:43.110423 1167551 kapi.go:107] duration metric: took 52.005631429s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	[... 32 kapi.go:96 poll lines elided: "app.kubernetes.io/name=ingress-nginx" stayed Pending, polled roughly every 500ms from 22:22:43 to 22:22:58 ...]
	I0831 22:22:59.118011 1167551 kapi.go:107] duration metric: took 1m9.003994399s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0831 22:23:15.336232 1167551 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	[... 136 kapi.go:96 poll lines elided: "kubernetes.io/minikube-addons=gcp-auth" stayed Pending, polled roughly every 500ms from 22:23:15 to 22:24:22 ...]
	I0831 22:24:23.334976 1167551 kapi.go:107] duration metric: took 2m31.003451599s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0831 22:24:23.336919 1167551 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-516593 cluster.
	I0831 22:24:23.338490 1167551 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0831 22:24:23.340107 1167551 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0831 22:24:23.342138 1167551 out.go:177] * Enabled addons: cloud-spanner, nvidia-device-plugin, ingress-dns, storage-provisioner, default-storageclass, volcano, metrics-server, inspektor-gadget, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0831 22:24:23.343961 1167551 addons.go:510] duration metric: took 2m43.721132656s for enable addons: enabled=[cloud-spanner nvidia-device-plugin ingress-dns storage-provisioner default-storageclass volcano metrics-server inspektor-gadget yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0831 22:24:23.344006 1167551 start.go:246] waiting for cluster config update ...
	I0831 22:24:23.344028 1167551 start.go:255] writing updated cluster config ...
	I0831 22:24:23.344329 1167551 ssh_runner.go:195] Run: rm -f paused
	I0831 22:24:23.694836 1167551 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0831 22:24:23.697627 1167551 out.go:177] * Done! kubectl is now configured to use "addons-516593" cluster and "default" namespace by default
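
The gcp-auth notes above name the `gcp-auth-skip-secret` label as the opt-out for credential mounting. As a minimal sketch of how a pod would carry that label (the pod name, namespace, and image are illustrative, not from this run; the log only specifies the key, so the "true" value is an assumption):

kubectl --context addons-516593 apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: skip-gcp-auth-example       # illustrative name, not from this run
  labels:
    gcp-auth-skip-secret: "true"    # key named in the log above; the value is an assumption
spec:
  containers:
  - name: app
    image: nginx                    # placeholder image
EOF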
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	82c177de3970e       e2d3313f65753       2 minutes ago       Exited              gadget                                   5                   0ffa4e72d320a       gadget-p7vk8
	5c5ee1711d2f5       6ef582f3ec844       3 minutes ago       Running             gcp-auth                                 0                   ec7aba2364ca2       gcp-auth-89d5ffd79-zfc94
	82a88bf393a00       8b46b1cd48760       4 minutes ago       Running             admission                                0                   90e1e4fc2820c       volcano-admission-77d7d48b68-pg6tc
	8036ff0b0638c       289a818c8d9c5       4 minutes ago       Running             controller                               0                   e06a72c0af254       ingress-nginx-controller-bc57996ff-hk8dm
	c38afbc5117ae       ee6d597e62dc8       5 minutes ago       Running             csi-snapshotter                          0                   c764e2097d39b       csi-hostpathplugin-kqw2s
	63208c3de18ad       642ded511e141       5 minutes ago       Running             csi-provisioner                          0                   c764e2097d39b       csi-hostpathplugin-kqw2s
	4180532dae614       922312104da8a       5 minutes ago       Running             liveness-probe                           0                   c764e2097d39b       csi-hostpathplugin-kqw2s
	1ce51d43df610       420193b27261a       5 minutes ago       Exited              patch                                    2                   cc041695f7c52       ingress-nginx-admission-patch-2mgjh
	008468ff90fb4       08f6b2990811a       5 minutes ago       Running             hostpath                                 0                   c764e2097d39b       csi-hostpathplugin-kqw2s
	cb7506babfa9e       0107d56dbc0be       5 minutes ago       Running             node-driver-registrar                    0                   c764e2097d39b       csi-hostpathplugin-kqw2s
	d1066ee4be901       1461903ec4fe9       5 minutes ago       Running             csi-external-health-monitor-controller   0                   c764e2097d39b       csi-hostpathplugin-kqw2s
	75e714af4edb3       487fa743e1e22       5 minutes ago       Running             csi-resizer                              0                   497de999088ff       csi-hostpath-resizer-0
	d0a2a375ab955       1505f556b3a7b       5 minutes ago       Running             volcano-controllers                      0                   fbaf67a83ef1d       volcano-controllers-56675bb4d5-qbpd8
	b811de3ab266a       9a80d518f102c       5 minutes ago       Running             csi-attacher                             0                   0ec6a03b655e5       csi-hostpath-attacher-0
	ea0aaf9672310       4d1e5c3e97420       5 minutes ago       Running             volume-snapshot-controller               0                   cd70cd73bad26       snapshot-controller-56fcc65765-gv9mw
	4214ba94781f0       d9c7ad4c226bf       5 minutes ago       Running             volcano-scheduler                        0                   4bdf638f549cf       volcano-scheduler-576bc46687-g444f
	5507e4f5a4ae5       420193b27261a       5 minutes ago       Exited              create                                   0                   75770c5fb9dea       ingress-nginx-admission-create-ch2c8
	4b7146c6986fc       4d1e5c3e97420       5 minutes ago       Running             volume-snapshot-controller               0                   a23c2c86057f6       snapshot-controller-56fcc65765-tmmfs
	70d72c9bce2f7       3410e1561990a       5 minutes ago       Running             registry-proxy                           0                   b7c5b26df8bae       registry-proxy-z5ckz
	bb6e63e5b3dbe       8be4bcf8ec607       5 minutes ago       Running             cloud-spanner-emulator                   0                   8df76984fc2c5       cloud-spanner-emulator-769b77f747-5v7l9
	989fe0e393621       7ce2150c8929b       5 minutes ago       Running             local-path-provisioner                   0                   91fb4755cd920       local-path-provisioner-86d989889c-jrkqw
	c1ca5a156b2d2       6fed88f43b276       5 minutes ago       Running             registry                                 0                   a419718a7f399       registry-6fb4cdfc84-wvmsl
	c3ea25c744386       77bdba588b953       5 minutes ago       Running             yakd                                     0                   b390f13024ae4       yakd-dashboard-67d98fc6b-d5s6m
	28514fb384e4a       2437cf7621777       5 minutes ago       Running             coredns                                  0                   e2ff6a227c5b6       coredns-6f6b679f8f-8d6s7
	3798776efff8a       5548a49bb60ba       5 minutes ago       Running             metrics-server                           0                   85f454319a49b       metrics-server-84c5f94fbc-kmkrh
	134b40fe53b45       a9bac31a5be8d       5 minutes ago       Running             nvidia-device-plugin-ctr                 0                   56104e9388f48       nvidia-device-plugin-daemonset-bb285
	0c58f1ff9fd81       35508c2f890c4       5 minutes ago       Running             minikube-ingress-dns                     0                   12e6ff37330e8       kube-ingress-dns-minikube
	35b7fc9faaad9       ba04bb24b9575       5 minutes ago       Running             storage-provisioner                      0                   d9c5bf0bbb757       storage-provisioner
	2a20cb3459063       6a23fa8fd2b78       5 minutes ago       Running             kindnet-cni                              0                   0adf8f8318a71       kindnet-qbd29
	ad060a199cdac       71d55d66fd4ee       6 minutes ago       Running             kube-proxy                               0                   5acd8e0fb2d83       kube-proxy-8tqf5
	c14e6f1573afe       fcb0683e6bdbd       6 minutes ago       Running             kube-controller-manager                  0                   7ce84fbc0a9cf       kube-controller-manager-addons-516593
	85e97bb3f4477       cd0f0ae0ec9e0       6 minutes ago       Running             kube-apiserver                           0                   ce82a329e7f64       kube-apiserver-addons-516593
	93b1c2f4f4855       27e3830e14027       6 minutes ago       Running             etcd                                     0                   36a7a2f80aeb3       etcd-addons-516593
	a6d9b889aec2a       fbbbd428abb4d       6 minutes ago       Running             kube-scheduler                           0                   c41826db3616b       kube-scheduler-addons-516593
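
In this table the Exited `create` and `patch` entries are the one-shot ingress-nginx admission jobs finishing normally, while the Exited `gadget` entry at attempt 5 matches the CrashLoopBackOff reported in the kubelet section at the end of this report. As a sketch of how to reproduce a CRI-level listing like this against the node (assuming the profile name shown throughout this run):

minikube -p addons-516593 ssh -- sudo crictl ps -a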
	
	
	==> containerd <==
	Aug 31 22:25:22 addons-516593 containerd[810]: time="2024-08-31T22:25:22.000515643Z" level=info msg="CreateContainer within sandbox \"0ffa4e72d320a3fd777535f72dc9a6e70ac002f6698093ac27756134d982ab38\" for container &ContainerMetadata{Name:gadget,Attempt:5,}"
	Aug 31 22:25:22 addons-516593 containerd[810]: time="2024-08-31T22:25:22.038953581Z" level=info msg="CreateContainer within sandbox \"0ffa4e72d320a3fd777535f72dc9a6e70ac002f6698093ac27756134d982ab38\" for &ContainerMetadata{Name:gadget,Attempt:5,} returns container id \"82c177de3970e966f9dc53130284b6d94d1fc1030f2a93635db89b28bd6c150a\""
	Aug 31 22:25:22 addons-516593 containerd[810]: time="2024-08-31T22:25:22.039815231Z" level=info msg="StartContainer for \"82c177de3970e966f9dc53130284b6d94d1fc1030f2a93635db89b28bd6c150a\""
	Aug 31 22:25:22 addons-516593 containerd[810]: time="2024-08-31T22:25:22.100712864Z" level=info msg="StartContainer for \"82c177de3970e966f9dc53130284b6d94d1fc1030f2a93635db89b28bd6c150a\" returns successfully"
	Aug 31 22:25:23 addons-516593 containerd[810]: time="2024-08-31T22:25:23.200175309Z" level=error msg="ExecSync for \"82c177de3970e966f9dc53130284b6d94d1fc1030f2a93635db89b28bd6c150a\" failed" error="failed to exec in container: failed to start exec \"782bfd171c17f207d83eebd319cc5fc4628c008d5f9024dec06b766ccae366c6\": OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown"
	Aug 31 22:25:23 addons-516593 containerd[810]: time="2024-08-31T22:25:23.213671360Z" level=error msg="ExecSync for \"82c177de3970e966f9dc53130284b6d94d1fc1030f2a93635db89b28bd6c150a\" failed" error="failed to exec in container: failed to start exec \"5387e30cdcf4ad902fe88f538156058bcf2b1cd657eb2e7ce61c2f40010c23ef\": OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown"
	Aug 31 22:25:23 addons-516593 containerd[810]: time="2024-08-31T22:25:23.228108807Z" level=error msg="ExecSync for \"82c177de3970e966f9dc53130284b6d94d1fc1030f2a93635db89b28bd6c150a\" failed" error="failed to exec in container: failed to start exec \"0a76965c51ace5754df5d28f20227acbe1e2e9bbcfa85182792c4e064167f16b\": OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown"
	Aug 31 22:25:23 addons-516593 containerd[810]: time="2024-08-31T22:25:23.240553916Z" level=error msg="ExecSync for \"82c177de3970e966f9dc53130284b6d94d1fc1030f2a93635db89b28bd6c150a\" failed" error="failed to exec in container: failed to start exec \"4e02d543b3ca7fbf21d8cea6a7cb6751dee570c1716dbd6800c5869f1c2912d6\": OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown"
	Aug 31 22:25:23 addons-516593 containerd[810]: time="2024-08-31T22:25:23.253144642Z" level=error msg="ExecSync for \"82c177de3970e966f9dc53130284b6d94d1fc1030f2a93635db89b28bd6c150a\" failed" error="failed to exec in container: failed to start exec \"0baff0c57a804ac5b5563585418db69845453bcf5e33bac73a12109c60212a46\": OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown"
	Aug 31 22:25:23 addons-516593 containerd[810]: time="2024-08-31T22:25:23.262439370Z" level=error msg="ExecSync for \"82c177de3970e966f9dc53130284b6d94d1fc1030f2a93635db89b28bd6c150a\" failed" error="failed to exec in container: failed to start exec \"b1af510e6691ecaaa9f596788b65dc71a51fc3571f560d273e370d74bccb2041\": OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown"
	Aug 31 22:25:23 addons-516593 containerd[810]: time="2024-08-31T22:25:23.366520098Z" level=info msg="shim disconnected" id=82c177de3970e966f9dc53130284b6d94d1fc1030f2a93635db89b28bd6c150a namespace=k8s.io
	Aug 31 22:25:23 addons-516593 containerd[810]: time="2024-08-31T22:25:23.366582227Z" level=warning msg="cleaning up after shim disconnected" id=82c177de3970e966f9dc53130284b6d94d1fc1030f2a93635db89b28bd6c150a namespace=k8s.io
	Aug 31 22:25:23 addons-516593 containerd[810]: time="2024-08-31T22:25:23.366593262Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Aug 31 22:25:24 addons-516593 containerd[810]: time="2024-08-31T22:25:24.228513071Z" level=info msg="RemoveContainer for \"9f57feb7095b936df3beeea402d56e6d040bcab3140a0260a99a7e53e8be1fae\""
	Aug 31 22:25:24 addons-516593 containerd[810]: time="2024-08-31T22:25:24.238179490Z" level=info msg="RemoveContainer for \"9f57feb7095b936df3beeea402d56e6d040bcab3140a0260a99a7e53e8be1fae\" returns successfully"
	Aug 31 22:25:34 addons-516593 containerd[810]: time="2024-08-31T22:25:34.986414171Z" level=info msg="RemoveContainer for \"c18946c76d415ce7b7b14ad1f72d69e494ef1eb19439c5f0096793a970e90d40\""
	Aug 31 22:25:34 addons-516593 containerd[810]: time="2024-08-31T22:25:34.992269437Z" level=info msg="RemoveContainer for \"c18946c76d415ce7b7b14ad1f72d69e494ef1eb19439c5f0096793a970e90d40\" returns successfully"
	Aug 31 22:25:34 addons-516593 containerd[810]: time="2024-08-31T22:25:34.994849202Z" level=info msg="StopPodSandbox for \"f53da3e13d7b057e6c6b6074b14cebcf6769ffe6b208bb652b835331ddcf7181\""
	Aug 31 22:25:35 addons-516593 containerd[810]: time="2024-08-31T22:25:35.018616208Z" level=info msg="TearDown network for sandbox \"f53da3e13d7b057e6c6b6074b14cebcf6769ffe6b208bb652b835331ddcf7181\" successfully"
	Aug 31 22:25:35 addons-516593 containerd[810]: time="2024-08-31T22:25:35.018663076Z" level=info msg="StopPodSandbox for \"f53da3e13d7b057e6c6b6074b14cebcf6769ffe6b208bb652b835331ddcf7181\" returns successfully"
	Aug 31 22:25:35 addons-516593 containerd[810]: time="2024-08-31T22:25:35.019236801Z" level=info msg="RemovePodSandbox for \"f53da3e13d7b057e6c6b6074b14cebcf6769ffe6b208bb652b835331ddcf7181\""
	Aug 31 22:25:35 addons-516593 containerd[810]: time="2024-08-31T22:25:35.019345765Z" level=info msg="Forcibly stopping sandbox \"f53da3e13d7b057e6c6b6074b14cebcf6769ffe6b208bb652b835331ddcf7181\""
	Aug 31 22:25:35 addons-516593 containerd[810]: time="2024-08-31T22:25:35.032093454Z" level=info msg="TearDown network for sandbox \"f53da3e13d7b057e6c6b6074b14cebcf6769ffe6b208bb652b835331ddcf7181\" successfully"
	Aug 31 22:25:35 addons-516593 containerd[810]: time="2024-08-31T22:25:35.041574085Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f53da3e13d7b057e6c6b6074b14cebcf6769ffe6b208bb652b835331ddcf7181\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Aug 31 22:25:35 addons-516593 containerd[810]: time="2024-08-31T22:25:35.041700534Z" level=info msg="RemovePodSandbox \"f53da3e13d7b057e6c6b6074b14cebcf6769ffe6b208bb652b835331ddcf7181\" returns successfully"
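
The ExecSync failures above are exec calls, most likely exec-based health probes, landing on a container that has already exited: containerd starts the `gadget` container at 22:25:22, the execs fail with `cannot exec in a stopped container` a second later, and the shim cleanup follows. The kubelet section below shows the same container in CrashLoopBackOff. A typical next step is to pull the logs of the previous, exited attempt with the standard kubectl flag:

kubectl --context addons-516593 -n gadget logs gadget-p7vk8 --previous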
	
	
	==> coredns [28514fb384e4a2a1ae2c5015ac701538dd2d2dd901c3f8dc203dc017d73900fb] <==
	[INFO] 10.244.0.6:47444 - 24465 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00007456s
	[INFO] 10.244.0.6:46341 - 41902 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002381407s
	[INFO] 10.244.0.6:46341 - 13737 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002297945s
	[INFO] 10.244.0.6:57785 - 36254 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00009316s
	[INFO] 10.244.0.6:57785 - 37788 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00009627s
	[INFO] 10.244.0.6:57865 - 47523 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000095869s
	[INFO] 10.244.0.6:57865 - 21166 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000039138s
	[INFO] 10.244.0.6:37609 - 48916 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000053825s
	[INFO] 10.244.0.6:37609 - 9065 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000040394s
	[INFO] 10.244.0.6:43019 - 10105 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000062408s
	[INFO] 10.244.0.6:43019 - 30843 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000040123s
	[INFO] 10.244.0.6:60881 - 23511 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001827456s
	[INFO] 10.244.0.6:60881 - 42453 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001789392s
	[INFO] 10.244.0.6:47754 - 18721 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000076439s
	[INFO] 10.244.0.6:47754 - 39203 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000052603s
	[INFO] 10.244.0.24:44348 - 49664 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00201788s
	[INFO] 10.244.0.24:52739 - 49698 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.002040362s
	[INFO] 10.244.0.24:34813 - 8990 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000151245s
	[INFO] 10.244.0.24:46931 - 10291 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00010007s
	[INFO] 10.244.0.24:48346 - 13723 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000131281s
	[INFO] 10.244.0.24:36515 - 17131 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000123035s
	[INFO] 10.244.0.24:54253 - 44029 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002016427s
	[INFO] 10.244.0.24:52656 - 38421 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.001841971s
	[INFO] 10.244.0.24:43484 - 60147 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001550149s
	[INFO] 10.244.0.24:45912 - 18422 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001679469s
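
The NXDOMAIN runs above are ordinary Kubernetes search-path expansion rather than failures: with the default `ndots:5`, a short name such as `storage.googleapis.com` is tried against every suffix in the pod's search list before the final bare-name query returns NOERROR. Consistent with the suffixes visible above, the querying pod's /etc/resolv.conf would look roughly like this (the nameserver address is an assumption, the conventional kube-dns ClusterIP; it does not appear in this log):

search gcp-auth.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal
nameserver 10.96.0.10
options ndots:5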
	
	
	==> describe nodes <==
	Name:               addons-516593
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-516593
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8ab9a20c866aaad18bea6fac47c5d146303457d2
	                    minikube.k8s.io/name=addons-516593
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_31T22_21_35_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-516593
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-516593"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 31 Aug 2024 22:21:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-516593
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 31 Aug 2024 22:27:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 31 Aug 2024 22:24:39 +0000   Sat, 31 Aug 2024 22:21:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 31 Aug 2024 22:24:39 +0000   Sat, 31 Aug 2024 22:21:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 31 Aug 2024 22:24:39 +0000   Sat, 31 Aug 2024 22:21:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 31 Aug 2024 22:24:39 +0000   Sat, 31 Aug 2024 22:21:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-516593
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 4761731680224acd94e0ae3d8b089e3c
	  System UUID:                b69e4db4-0e73-404d-9404-1bf9c615e5a6
	  Boot ID:                    844307fd-f17e-4b74-a327-71aead28c204
	  Kernel Version:             5.15.0-1068-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.21
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (27 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-769b77f747-5v7l9     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m59s
	  gadget                      gadget-p7vk8                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m55s
	  gcp-auth                    gcp-auth-89d5ffd79-zfc94                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m27s
	  ingress-nginx               ingress-nginx-controller-bc57996ff-hk8dm    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         5m54s
	  kube-system                 coredns-6f6b679f8f-8d6s7                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     6m2s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m52s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m51s
	  kube-system                 csi-hostpathplugin-kqw2s                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m52s
	  kube-system                 etcd-addons-516593                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         6m7s
	  kube-system                 kindnet-qbd29                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      6m3s
	  kube-system                 kube-apiserver-addons-516593                250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m7s
	  kube-system                 kube-controller-manager-addons-516593       200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m7s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m58s
	  kube-system                 kube-proxy-8tqf5                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m2s
	  kube-system                 kube-scheduler-addons-516593                100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m8s
	  kube-system                 metrics-server-84c5f94fbc-kmkrh             100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         5m57s
	  kube-system                 nvidia-device-plugin-daemonset-bb285        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m
	  kube-system                 registry-6fb4cdfc84-wvmsl                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m58s
	  kube-system                 registry-proxy-z5ckz                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m58s
	  kube-system                 snapshot-controller-56fcc65765-gv9mw        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m55s
	  kube-system                 snapshot-controller-56fcc65765-tmmfs        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m55s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m57s
	  local-path-storage          local-path-provisioner-86d989889c-jrkqw     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m57s
	  volcano-system              volcano-admission-77d7d48b68-pg6tc          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m54s
	  volcano-system              volcano-controllers-56675bb4d5-qbpd8        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m53s
	  volcano-system              volcano-scheduler-576bc46687-g444f          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m53s
	  yakd-dashboard              yakd-dashboard-67d98fc6b-d5s6m              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     5m56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 6m1s                   kube-proxy       
	  Normal   NodeHasSufficientMemory  6m16s (x8 over 6m16s)  kubelet          Node addons-516593 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m16s (x7 over 6m16s)  kubelet          Node addons-516593 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m16s (x7 over 6m16s)  kubelet          Node addons-516593 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  6m16s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 6m8s                   kubelet          Starting kubelet.
	  Warning  CgroupV1                 6m8s                   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  6m7s                   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  6m7s                   kubelet          Node addons-516593 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m7s                   kubelet          Node addons-516593 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m7s                   kubelet          Node addons-516593 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           6m3s                   node-controller  Node addons-516593 event: Registered Node addons-516593 in Controller
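
The 52% CPU figure above is 1050m of requests against the node's 2000m allocatable (cpu: 2), i.e. 1050/2000 = 52.5%, shown truncated by kubectl's integer arithmetic. This section is the standard node description and can be reproduced against this profile with:

kubectl --context addons-516593 describe node addons-516593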
	
	
	==> dmesg <==
	[Aug31 19:56] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	
	
	==> etcd [93b1c2f4f485556bee1c3f50d6cd4c58aad907acda359a84795975a37b856810] <==
	{"level":"info","ts":"2024-08-31T22:21:27.577555Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2024-08-31T22:21:27.572943Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-08-31T22:21:27.577624Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-08-31T22:21:27.578703Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-31T22:21:27.578728Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-31T22:21:28.456655Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-08-31T22:21:28.456853Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-08-31T22:21:28.456972Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-08-31T22:21:28.457088Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-08-31T22:21:28.457164Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-08-31T22:21:28.457250Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-08-31T22:21:28.457316Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-08-31T22:21:28.460745Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-31T22:21:28.462299Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-516593 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-31T22:21:28.462346Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-31T22:21:28.462726Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-31T22:21:28.462934Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-31T22:21:28.463035Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-31T22:21:28.463157Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-31T22:21:28.469354Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-31T22:21:28.473527Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-08-31T22:21:28.481260Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-31T22:21:28.482559Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-31T22:21:28.481302Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-31T22:21:28.488852Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> gcp-auth [5c5ee1711d2f5a0f1951d47e7bad027ebcb050f58f98e5cfc866b7b95c91ca96] <==
	2024/08/31 22:24:22 GCP Auth Webhook started!
	2024/08/31 22:24:40 Ready to marshal response ...
	2024/08/31 22:24:40 Ready to write response ...
	2024/08/31 22:24:40 Ready to marshal response ...
	2024/08/31 22:24:40 Ready to write response ...
	
	
	==> kernel <==
	 22:27:42 up  6:10,  0 users,  load average: 0.25, 1.46, 2.50
	Linux addons-516593 5.15.0-1068-aws #74~20.04.1-Ubuntu SMP Tue Aug 6 19:45:17 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [2a20cb3459063c26bd497a40fae53556ac9f275869965f4433f05bf61787d5eb] <==
	I0831 22:25:33.509284       1 main.go:299] handling current node
	I0831 22:25:43.509885       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0831 22:25:43.509919       1 main.go:299] handling current node
	I0831 22:25:53.509259       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0831 22:25:53.509297       1 main.go:299] handling current node
	I0831 22:26:03.509603       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0831 22:26:03.509637       1 main.go:299] handling current node
	I0831 22:26:13.509151       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0831 22:26:13.510548       1 main.go:299] handling current node
	I0831 22:26:23.509281       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0831 22:26:23.509318       1 main.go:299] handling current node
	I0831 22:26:33.508919       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0831 22:26:33.508957       1 main.go:299] handling current node
	I0831 22:26:43.509129       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0831 22:26:43.509166       1 main.go:299] handling current node
	I0831 22:26:53.508883       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0831 22:26:53.508929       1 main.go:299] handling current node
	I0831 22:27:03.509786       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0831 22:27:03.509821       1 main.go:299] handling current node
	I0831 22:27:13.509277       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0831 22:27:13.509310       1 main.go:299] handling current node
	I0831 22:27:23.509551       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0831 22:27:23.509587       1 main.go:299] handling current node
	I0831 22:27:33.509731       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0831 22:27:33.509766       1 main.go:299] handling current node
	
	
	==> kube-apiserver [85e97bb3f4477f2709721e348c710b274202cbac4f29bdbdf864f46b9c488c3c] <==
	W0831 22:22:54.991329       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.102.121.194:443: connect: connection refused
	W0831 22:22:55.291606       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.106.196.255:443: connect: connection refused
	E0831 22:22:55.291651       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.106.196.255:443: connect: connection refused" logger="UnhandledError"
	W0831 22:22:55.293396       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.102.121.194:443: connect: connection refused
	W0831 22:22:55.350657       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.106.196.255:443: connect: connection refused
	E0831 22:22:55.350696       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.106.196.255:443: connect: connection refused" logger="UnhandledError"
	W0831 22:22:55.352749       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.102.121.194:443: connect: connection refused
	W0831 22:22:56.013360       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.102.121.194:443: connect: connection refused
	W0831 22:22:57.026718       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.102.121.194:443: connect: connection refused
	W0831 22:22:58.038983       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.102.121.194:443: connect: connection refused
	W0831 22:22:59.068453       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.102.121.194:443: connect: connection refused
	W0831 22:23:00.110198       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.102.121.194:443: connect: connection refused
	W0831 22:23:01.174149       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.102.121.194:443: connect: connection refused
	W0831 22:23:02.198018       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.102.121.194:443: connect: connection refused
	W0831 22:23:03.264346       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.102.121.194:443: connect: connection refused
	W0831 22:23:04.326885       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.102.121.194:443: connect: connection refused
	W0831 22:23:05.387979       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.102.121.194:443: connect: connection refused
	W0831 22:23:15.238635       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.106.196.255:443: connect: connection refused
	E0831 22:23:15.238680       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.106.196.255:443: connect: connection refused" logger="UnhandledError"
	W0831 22:23:55.302684       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.106.196.255:443: connect: connection refused
	E0831 22:23:55.302873       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.106.196.255:443: connect: connection refused" logger="UnhandledError"
	W0831 22:23:55.358582       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.106.196.255:443: connect: connection refused
	E0831 22:23:55.358868       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.106.196.255:443: connect: connection refused" logger="UnhandledError"
	I0831 22:24:40.267380       1 controller.go:615] quota admission added evaluator for: jobs.batch.volcano.sh
	I0831 22:24:40.298539       1 controller.go:615] quota admission added evaluator for: podgroups.scheduling.volcano.sh
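
The pattern above is the startup window before the admission webhook backends had ready endpoints: calls to `gcp-auth-mutate.k8s.io` fail open (logged as warnings, requests proceed), while the volcano queue/pod mutators fail closed (requests are rejected until the service answers). The registered webhooks, including each one's failurePolicy, can be inspected with standard kubectl:

kubectl --context addons-516593 get mutatingwebhookconfigurations -o yaml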
	
	
	==> kube-controller-manager [c14e6f1573afe144a9624a66265f0c8ef6cabf5a61aa62b45e2ab6ae80595844] <==
	I0831 22:23:55.353530       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0831 22:23:55.369420       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0831 22:23:55.378596       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0831 22:23:55.389827       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0831 22:23:55.396121       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0831 22:23:57.012284       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0831 22:23:57.051944       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0831 22:23:57.960950       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0831 22:23:58.092025       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0831 22:23:58.971240       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0831 22:23:59.057234       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0831 22:23:59.098944       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0831 22:23:59.108358       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0831 22:23:59.114097       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0831 22:23:59.981072       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0831 22:23:59.988007       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0831 22:23:59.996112       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0831 22:24:23.061253       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="13.522891ms"
	I0831 22:24:23.061911       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="42.273µs"
	I0831 22:24:29.021203       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="0s"
	I0831 22:24:29.025577       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="0s"
	I0831 22:24:29.066358       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="0s"
	I0831 22:24:29.072819       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="0s"
	I0831 22:24:39.725857       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-516593"
	I0831 22:24:39.927455       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="volcano-system/volcano-admission-init" delay="0s"
	
	
	==> kube-proxy [ad060a199cdac3a1e455b69073c1b0c34123794dc8bfc0b487a9ecee4d7f046d] <==
	I0831 22:21:41.032353       1 server_linux.go:66] "Using iptables proxy"
	I0831 22:21:41.139906       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0831 22:21:41.139975       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0831 22:21:41.179158       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0831 22:21:41.179222       1 server_linux.go:169] "Using iptables Proxier"
	I0831 22:21:41.183326       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0831 22:21:41.183886       1 server.go:483] "Version info" version="v1.31.0"
	I0831 22:21:41.183904       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0831 22:21:41.195440       1 config.go:197] "Starting service config controller"
	I0831 22:21:41.195478       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0831 22:21:41.195551       1 config.go:104] "Starting endpoint slice config controller"
	I0831 22:21:41.195562       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0831 22:21:41.195979       1 config.go:326] "Starting node config controller"
	I0831 22:21:41.195988       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0831 22:21:41.295626       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0831 22:21:41.295709       1 shared_informer.go:320] Caches are synced for service config
	I0831 22:21:41.296008       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [a6d9b889aec2a49566d34a2ceb3931bba2f645710bae45a136ab1d7d6a43db48] <==
	W0831 22:21:32.386735       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0831 22:21:32.386811       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0831 22:21:32.386986       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0831 22:21:32.387062       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0831 22:21:32.387178       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0831 22:21:32.387238       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0831 22:21:32.387355       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0831 22:21:32.387435       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0831 22:21:32.387542       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0831 22:21:32.387629       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0831 22:21:33.239159       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0831 22:21:33.239204       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0831 22:21:33.331318       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0831 22:21:33.331577       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0831 22:21:33.338718       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0831 22:21:33.338946       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0831 22:21:33.363891       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0831 22:21:33.364107       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0831 22:21:33.442882       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0831 22:21:33.445291       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0831 22:21:33.486150       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0831 22:21:33.486436       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0831 22:21:33.584023       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0831 22:21:33.584276       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0831 22:21:35.361585       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 31 22:25:41 addons-516593 kubelet[1466]: I0831 22:25:41.867780    1466 scope.go:117] "RemoveContainer" containerID="82c177de3970e966f9dc53130284b6d94d1fc1030f2a93635db89b28bd6c150a"
	Aug 31 22:25:41 addons-516593 kubelet[1466]: E0831 22:25:41.867993    1466 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-p7vk8_gadget(55624912-c55a-43e2-84cf-47fb046ffe89)\"" pod="gadget/gadget-p7vk8" podUID="55624912-c55a-43e2-84cf-47fb046ffe89"
	Aug 31 22:25:55 addons-516593 kubelet[1466]: I0831 22:25:55.867476    1466 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-bb285" secret="" err="secret \"gcp-auth\" not found"
	Aug 31 22:25:56 addons-516593 kubelet[1466]: I0831 22:25:56.867024    1466 scope.go:117] "RemoveContainer" containerID="82c177de3970e966f9dc53130284b6d94d1fc1030f2a93635db89b28bd6c150a"
	Aug 31 22:25:56 addons-516593 kubelet[1466]: E0831 22:25:56.867225    1466 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-p7vk8_gadget(55624912-c55a-43e2-84cf-47fb046ffe89)\"" pod="gadget/gadget-p7vk8" podUID="55624912-c55a-43e2-84cf-47fb046ffe89"
	Aug 31 22:26:03 addons-516593 kubelet[1466]: I0831 22:26:03.867516    1466 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6fb4cdfc84-wvmsl" secret="" err="secret \"gcp-auth\" not found"
	Aug 31 22:26:09 addons-516593 kubelet[1466]: I0831 22:26:09.867416    1466 scope.go:117] "RemoveContainer" containerID="82c177de3970e966f9dc53130284b6d94d1fc1030f2a93635db89b28bd6c150a"
	Aug 31 22:26:09 addons-516593 kubelet[1466]: E0831 22:26:09.867636    1466 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-p7vk8_gadget(55624912-c55a-43e2-84cf-47fb046ffe89)\"" pod="gadget/gadget-p7vk8" podUID="55624912-c55a-43e2-84cf-47fb046ffe89"
	Aug 31 22:26:20 addons-516593 kubelet[1466]: I0831 22:26:20.866965    1466 scope.go:117] "RemoveContainer" containerID="82c177de3970e966f9dc53130284b6d94d1fc1030f2a93635db89b28bd6c150a"
	Aug 31 22:26:20 addons-516593 kubelet[1466]: E0831 22:26:20.867937    1466 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-p7vk8_gadget(55624912-c55a-43e2-84cf-47fb046ffe89)\"" pod="gadget/gadget-p7vk8" podUID="55624912-c55a-43e2-84cf-47fb046ffe89"
	Aug 31 22:26:24 addons-516593 kubelet[1466]: I0831 22:26:24.868671    1466 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-z5ckz" secret="" err="secret \"gcp-auth\" not found"
	Aug 31 22:26:34 addons-516593 kubelet[1466]: I0831 22:26:34.867707    1466 scope.go:117] "RemoveContainer" containerID="82c177de3970e966f9dc53130284b6d94d1fc1030f2a93635db89b28bd6c150a"
	Aug 31 22:26:34 addons-516593 kubelet[1466]: E0831 22:26:34.867894    1466 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-p7vk8_gadget(55624912-c55a-43e2-84cf-47fb046ffe89)\"" pod="gadget/gadget-p7vk8" podUID="55624912-c55a-43e2-84cf-47fb046ffe89"
	Aug 31 22:26:48 addons-516593 kubelet[1466]: I0831 22:26:48.867001    1466 scope.go:117] "RemoveContainer" containerID="82c177de3970e966f9dc53130284b6d94d1fc1030f2a93635db89b28bd6c150a"
	Aug 31 22:26:48 addons-516593 kubelet[1466]: E0831 22:26:48.867671    1466 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-p7vk8_gadget(55624912-c55a-43e2-84cf-47fb046ffe89)\"" pod="gadget/gadget-p7vk8" podUID="55624912-c55a-43e2-84cf-47fb046ffe89"
	Aug 31 22:27:00 addons-516593 kubelet[1466]: I0831 22:27:00.866840    1466 scope.go:117] "RemoveContainer" containerID="82c177de3970e966f9dc53130284b6d94d1fc1030f2a93635db89b28bd6c150a"
	Aug 31 22:27:00 addons-516593 kubelet[1466]: E0831 22:27:00.867483    1466 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-p7vk8_gadget(55624912-c55a-43e2-84cf-47fb046ffe89)\"" pod="gadget/gadget-p7vk8" podUID="55624912-c55a-43e2-84cf-47fb046ffe89"
	Aug 31 22:27:14 addons-516593 kubelet[1466]: I0831 22:27:14.867997    1466 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-bb285" secret="" err="secret \"gcp-auth\" not found"
	Aug 31 22:27:14 addons-516593 kubelet[1466]: I0831 22:27:14.871552    1466 scope.go:117] "RemoveContainer" containerID="82c177de3970e966f9dc53130284b6d94d1fc1030f2a93635db89b28bd6c150a"
	Aug 31 22:27:14 addons-516593 kubelet[1466]: E0831 22:27:14.871714    1466 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-p7vk8_gadget(55624912-c55a-43e2-84cf-47fb046ffe89)\"" pod="gadget/gadget-p7vk8" podUID="55624912-c55a-43e2-84cf-47fb046ffe89"
	Aug 31 22:27:25 addons-516593 kubelet[1466]: I0831 22:27:25.867073    1466 scope.go:117] "RemoveContainer" containerID="82c177de3970e966f9dc53130284b6d94d1fc1030f2a93635db89b28bd6c150a"
	Aug 31 22:27:25 addons-516593 kubelet[1466]: E0831 22:27:25.867265    1466 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-p7vk8_gadget(55624912-c55a-43e2-84cf-47fb046ffe89)\"" pod="gadget/gadget-p7vk8" podUID="55624912-c55a-43e2-84cf-47fb046ffe89"
	Aug 31 22:27:26 addons-516593 kubelet[1466]: I0831 22:27:26.866815    1466 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6fb4cdfc84-wvmsl" secret="" err="secret \"gcp-auth\" not found"
	Aug 31 22:27:40 addons-516593 kubelet[1466]: I0831 22:27:40.866782    1466 scope.go:117] "RemoveContainer" containerID="82c177de3970e966f9dc53130284b6d94d1fc1030f2a93635db89b28bd6c150a"
	Aug 31 22:27:40 addons-516593 kubelet[1466]: E0831 22:27:40.867010    1466 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-p7vk8_gadget(55624912-c55a-43e2-84cf-47fb046ffe89)\"" pod="gadget/gadget-p7vk8" podUID="55624912-c55a-43e2-84cf-47fb046ffe89"
	
	
	==> storage-provisioner [35b7fc9faaad96a693adad6252138d5c41bc3cd4c75aaf1bdefbfaea6550df93] <==
	I0831 22:21:46.236764       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0831 22:21:46.292035       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0831 22:21:46.292108       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0831 22:21:46.300776       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0831 22:21:46.301361       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"38351ae9-df48-4c11-b2e6-c794952eb73f", APIVersion:"v1", ResourceVersion:"594", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-516593_6523f459-759a-4930-8116-ebd0ae4cf929 became leader
	I0831 22:21:46.301393       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-516593_6523f459-759a-4930-8116-ebd0ae4cf929!
	I0831 22:21:46.407031       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-516593_6523f459-759a-4930-8116-ebd0ae4cf929!
	

-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-516593 -n addons-516593
helpers_test.go:262: (dbg) Run:  kubectl --context addons-516593 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:273: non-running pods: ingress-nginx-admission-create-ch2c8 ingress-nginx-admission-patch-2mgjh test-job-nginx-0
helpers_test.go:275: ======> post-mortem[TestAddons/serial/Volcano]: describe non-running pods <======
helpers_test.go:278: (dbg) Run:  kubectl --context addons-516593 describe pod ingress-nginx-admission-create-ch2c8 ingress-nginx-admission-patch-2mgjh test-job-nginx-0
helpers_test.go:278: (dbg) Non-zero exit: kubectl --context addons-516593 describe pod ingress-nginx-admission-create-ch2c8 ingress-nginx-admission-patch-2mgjh test-job-nginx-0: exit status 1 (94.662511ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-ch2c8" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-2mgjh" not found
	Error from server (NotFound): pods "test-job-nginx-0" not found

** /stderr **
helpers_test.go:280: kubectl --context addons-516593 describe pod ingress-nginx-admission-create-ch2c8 ingress-nginx-admission-patch-2mgjh test-job-nginx-0: exit status 1
--- FAIL: TestAddons/serial/Volcano (199.82s)

TestStartStop/group/old-k8s-version/serial/SecondStart (373.41s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-777320 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
E0831 23:11:40.634260 1166785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/functional-059694/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p old-k8s-version-777320 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: exit status 102 (6m9.254422485s)

-- stdout --
	* [old-k8s-version-777320] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18943
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18943-1161402/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-1161402/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	* Using the docker driver based on existing profile
	* Starting "old-k8s-version-777320" primary control-plane node in "old-k8s-version-777320" cluster
	* Pulling base image v0.0.44-1724862063-19530 ...
	* Restarting existing docker container for "old-k8s-version-777320" ...
	* Preparing Kubernetes v1.20.0 on containerd 1.7.21 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image registry.k8s.io/echoserver:1.4
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-777320 addons enable metrics-server
	
	* Enabled addons: storage-provisioner, default-storageclass, dashboard, metrics-server
	
	

-- /stdout --
** stderr ** 
	I0831 23:11:30.706743 1378731 out.go:345] Setting OutFile to fd 1 ...
	I0831 23:11:30.706963 1378731 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 23:11:30.706990 1378731 out.go:358] Setting ErrFile to fd 2...
	I0831 23:11:30.707010 1378731 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 23:11:30.707299 1378731 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-1161402/.minikube/bin
	I0831 23:11:30.707734 1378731 out.go:352] Setting JSON to false
	I0831 23:11:30.709132 1378731 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":24840,"bootTime":1725121051,"procs":193,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0831 23:11:30.709398 1378731 start.go:139] virtualization:  
	I0831 23:11:30.715577 1378731 out.go:177] * [old-k8s-version-777320] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0831 23:11:30.718070 1378731 out.go:177]   - MINIKUBE_LOCATION=18943
	I0831 23:11:30.718140 1378731 notify.go:220] Checking for updates...
	I0831 23:11:30.722003 1378731 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0831 23:11:30.724451 1378731 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18943-1161402/kubeconfig
	I0831 23:11:30.726403 1378731 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-1161402/.minikube
	I0831 23:11:30.728429 1378731 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0831 23:11:30.730457 1378731 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0831 23:11:30.732992 1378731 config.go:182] Loaded profile config "old-k8s-version-777320": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0831 23:11:30.735575 1378731 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0831 23:11:30.737259 1378731 driver.go:392] Setting default libvirt URI to qemu:///system
	I0831 23:11:30.769713 1378731 docker.go:123] docker version: linux-27.2.0:Docker Engine - Community
	I0831 23:11:30.769834 1378731 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0831 23:11:30.858270 1378731 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:56 OomKillDisable:true NGoroutines:68 SystemTime:2024-08-31 23:11:30.846235607 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0831 23:11:30.858381 1378731 docker.go:307] overlay module found
	I0831 23:11:30.861100 1378731 out.go:177] * Using the docker driver based on existing profile
	I0831 23:11:30.862835 1378731 start.go:297] selected driver: docker
	I0831 23:11:30.862849 1378731 start.go:901] validating driver "docker" against &{Name:old-k8s-version-777320 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-777320 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 23:11:30.862960 1378731 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0831 23:11:30.863566 1378731 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0831 23:11:30.953619 1378731 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:56 OomKillDisable:true NGoroutines:68 SystemTime:2024-08-31 23:11:30.943670081 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0831 23:11:30.953958 1378731 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0831 23:11:30.953989 1378731 cni.go:84] Creating CNI manager for ""
	I0831 23:11:30.954002 1378731 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0831 23:11:30.954039 1378731 start.go:340] cluster config:
	{Name:old-k8s-version-777320 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-777320 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 23:11:30.957787 1378731 out.go:177] * Starting "old-k8s-version-777320" primary control-plane node in "old-k8s-version-777320" cluster
	I0831 23:11:30.959753 1378731 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0831 23:11:30.961447 1378731 out.go:177] * Pulling base image v0.0.44-1724862063-19530 ...
	I0831 23:11:30.962980 1378731 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0831 23:11:30.963043 1378731 preload.go:146] Found local preload: /home/jenkins/minikube-integration/18943-1161402/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0831 23:11:30.963050 1378731 cache.go:56] Caching tarball of preloaded images
	I0831 23:11:30.963128 1378731 preload.go:172] Found /home/jenkins/minikube-integration/18943-1161402/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0831 23:11:30.963137 1378731 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on containerd
	I0831 23:11:30.963251 1378731 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/old-k8s-version-777320/config.json ...
	I0831 23:11:30.963465 1378731 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 in local docker daemon
	W0831 23:11:30.989076 1378731 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 is of wrong architecture
	I0831 23:11:30.989092 1378731 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 to local cache
	I0831 23:11:30.989161 1378731 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 in local cache directory
	I0831 23:11:30.989178 1378731 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 in local cache directory, skipping pull
	I0831 23:11:30.989182 1378731 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 exists in cache, skipping pull
	I0831 23:11:30.989189 1378731 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 as a tarball
	I0831 23:11:30.989195 1378731 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 from local cache
	I0831 23:11:31.139693 1378731 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 from cached tarball
	I0831 23:11:31.139733 1378731 cache.go:194] Successfully downloaded all kic artifacts
	I0831 23:11:31.139777 1378731 start.go:360] acquireMachinesLock for old-k8s-version-777320: {Name:mk54b97042efa3274c921174b5e60d0f3ab02127 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0831 23:11:31.139853 1378731 start.go:364] duration metric: took 51.922µs to acquireMachinesLock for "old-k8s-version-777320"
	I0831 23:11:31.139878 1378731 start.go:96] Skipping create...Using existing machine configuration
	I0831 23:11:31.139885 1378731 fix.go:54] fixHost starting: 
	I0831 23:11:31.140170 1378731 cli_runner.go:164] Run: docker container inspect old-k8s-version-777320 --format={{.State.Status}}
	I0831 23:11:31.162220 1378731 fix.go:112] recreateIfNeeded on old-k8s-version-777320: state=Stopped err=<nil>
	W0831 23:11:31.162256 1378731 fix.go:138] unexpected machine state, will restart: <nil>
	I0831 23:11:31.166433 1378731 out.go:177] * Restarting existing docker container for "old-k8s-version-777320" ...
	I0831 23:11:31.168284 1378731 cli_runner.go:164] Run: docker start old-k8s-version-777320
	I0831 23:11:31.558567 1378731 cli_runner.go:164] Run: docker container inspect old-k8s-version-777320 --format={{.State.Status}}
	I0831 23:11:31.586417 1378731 kic.go:435] container "old-k8s-version-777320" state is running.
	I0831 23:11:31.586802 1378731 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "old-k8s-version-777320")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-777320
	I0831 23:11:31.612062 1378731 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/old-k8s-version-777320/config.json ...
	I0831 23:11:31.612291 1378731 machine.go:93] provisionDockerMachine start ...
	I0831 23:11:31.612367 1378731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-777320
	I0831 23:11:31.649638 1378731 main.go:141] libmachine: Using SSH client type: native
	I0831 23:11:31.649905 1378731 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 34569 <nil> <nil>}
	I0831 23:11:31.649921 1378731 main.go:141] libmachine: About to run SSH command:
	hostname
	I0831 23:11:31.650786 1378731 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0831 23:11:34.804569 1378731 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-777320
	
	I0831 23:11:34.804600 1378731 ubuntu.go:169] provisioning hostname "old-k8s-version-777320"
	I0831 23:11:34.804704 1378731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-777320
	I0831 23:11:34.837403 1378731 main.go:141] libmachine: Using SSH client type: native
	I0831 23:11:34.837698 1378731 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 34569 <nil> <nil>}
	I0831 23:11:34.837709 1378731 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-777320 && echo "old-k8s-version-777320" | sudo tee /etc/hostname
	I0831 23:11:34.997894 1378731 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-777320
	
	I0831 23:11:34.997977 1378731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-777320
	I0831 23:11:35.026247 1378731 main.go:141] libmachine: Using SSH client type: native
	I0831 23:11:35.026523 1378731 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 34569 <nil> <nil>}
	I0831 23:11:35.026540 1378731 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-777320' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-777320/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-777320' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0831 23:11:35.169649 1378731 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0831 23:11:35.169680 1378731 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18943-1161402/.minikube CaCertPath:/home/jenkins/minikube-integration/18943-1161402/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18943-1161402/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18943-1161402/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18943-1161402/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18943-1161402/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18943-1161402/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18943-1161402/.minikube}
	I0831 23:11:35.169702 1378731 ubuntu.go:177] setting up certificates
	I0831 23:11:35.169711 1378731 provision.go:84] configureAuth start
	I0831 23:11:35.169798 1378731 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "old-k8s-version-777320")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-777320
	I0831 23:11:35.198374 1378731 provision.go:143] copyHostCerts
	I0831 23:11:35.198447 1378731 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-1161402/.minikube/ca.pem, removing ...
	I0831 23:11:35.198461 1378731 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-1161402/.minikube/ca.pem
	I0831 23:11:35.198542 1378731 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-1161402/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18943-1161402/.minikube/ca.pem (1078 bytes)
	I0831 23:11:35.198657 1378731 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-1161402/.minikube/cert.pem, removing ...
	I0831 23:11:35.198669 1378731 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-1161402/.minikube/cert.pem
	I0831 23:11:35.198698 1378731 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-1161402/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18943-1161402/.minikube/cert.pem (1123 bytes)
	I0831 23:11:35.198767 1378731 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-1161402/.minikube/key.pem, removing ...
	I0831 23:11:35.198778 1378731 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-1161402/.minikube/key.pem
	I0831 23:11:35.198802 1378731 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-1161402/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18943-1161402/.minikube/key.pem (1679 bytes)
	I0831 23:11:35.198866 1378731 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18943-1161402/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18943-1161402/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18943-1161402/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-777320 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-777320]
	I0831 23:11:35.425568 1378731 provision.go:177] copyRemoteCerts
	I0831 23:11:35.425653 1378731 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0831 23:11:35.425698 1378731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-777320
	I0831 23:11:35.444738 1378731 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34569 SSHKeyPath:/home/jenkins/minikube-integration/18943-1161402/.minikube/machines/old-k8s-version-777320/id_rsa Username:docker}
	I0831 23:11:35.542737 1378731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-1161402/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0831 23:11:35.571831 1378731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-1161402/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0831 23:11:35.598406 1378731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-1161402/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0831 23:11:35.624253 1378731 provision.go:87] duration metric: took 454.528814ms to configureAuth
	I0831 23:11:35.624276 1378731 ubuntu.go:193] setting minikube options for container-runtime
	I0831 23:11:35.624479 1378731 config.go:182] Loaded profile config "old-k8s-version-777320": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0831 23:11:35.624486 1378731 machine.go:96] duration metric: took 4.01218069s to provisionDockerMachine
	I0831 23:11:35.624494 1378731 start.go:293] postStartSetup for "old-k8s-version-777320" (driver="docker")
	I0831 23:11:35.624504 1378731 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0831 23:11:35.624553 1378731 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0831 23:11:35.624597 1378731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-777320
	I0831 23:11:35.648926 1378731 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34569 SSHKeyPath:/home/jenkins/minikube-integration/18943-1161402/.minikube/machines/old-k8s-version-777320/id_rsa Username:docker}
	I0831 23:11:35.746752 1378731 ssh_runner.go:195] Run: cat /etc/os-release
	I0831 23:11:35.750657 1378731 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0831 23:11:35.750690 1378731 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0831 23:11:35.750700 1378731 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0831 23:11:35.750707 1378731 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0831 23:11:35.750718 1378731 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-1161402/.minikube/addons for local assets ...
	I0831 23:11:35.750769 1378731 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-1161402/.minikube/files for local assets ...
	I0831 23:11:35.750854 1378731 filesync.go:149] local asset: /home/jenkins/minikube-integration/18943-1161402/.minikube/files/etc/ssl/certs/11667852.pem -> 11667852.pem in /etc/ssl/certs
	I0831 23:11:35.750963 1378731 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0831 23:11:35.760738 1378731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-1161402/.minikube/files/etc/ssl/certs/11667852.pem --> /etc/ssl/certs/11667852.pem (1708 bytes)
	I0831 23:11:35.793051 1378731 start.go:296] duration metric: took 168.540345ms for postStartSetup
	I0831 23:11:35.793215 1378731 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0831 23:11:35.793303 1378731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-777320
	I0831 23:11:35.811349 1378731 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34569 SSHKeyPath:/home/jenkins/minikube-integration/18943-1161402/.minikube/machines/old-k8s-version-777320/id_rsa Username:docker}
	I0831 23:11:35.906901 1378731 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0831 23:11:35.912796 1378731 fix.go:56] duration metric: took 4.772903089s for fixHost
	I0831 23:11:35.912821 1378731 start.go:83] releasing machines lock for "old-k8s-version-777320", held for 4.772956306s
	I0831 23:11:35.912907 1378731 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "old-k8s-version-777320")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-777320
	I0831 23:11:35.938343 1378731 ssh_runner.go:195] Run: cat /version.json
	I0831 23:11:35.938409 1378731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-777320
	I0831 23:11:35.938733 1378731 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0831 23:11:35.938790 1378731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-777320
	I0831 23:11:35.982756 1378731 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34569 SSHKeyPath:/home/jenkins/minikube-integration/18943-1161402/.minikube/machines/old-k8s-version-777320/id_rsa Username:docker}
	I0831 23:11:35.988142 1378731 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34569 SSHKeyPath:/home/jenkins/minikube-integration/18943-1161402/.minikube/machines/old-k8s-version-777320/id_rsa Username:docker}
	I0831 23:11:36.226444 1378731 ssh_runner.go:195] Run: systemctl --version
	I0831 23:11:36.231055 1378731 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0831 23:11:36.235674 1378731 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0831 23:11:36.259871 1378731 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0831 23:11:36.259950 1378731 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0831 23:11:36.269154 1378731 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0831 23:11:36.269179 1378731 start.go:495] detecting cgroup driver to use...
	I0831 23:11:36.269211 1378731 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0831 23:11:36.269257 1378731 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0831 23:11:36.292555 1378731 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0831 23:11:36.305250 1378731 docker.go:217] disabling cri-docker service (if available) ...
	I0831 23:11:36.305329 1378731 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0831 23:11:36.319244 1378731 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0831 23:11:36.331943 1378731 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0831 23:11:36.445947 1378731 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0831 23:11:36.557725 1378731 docker.go:233] disabling docker service ...
	I0831 23:11:36.557793 1378731 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0831 23:11:36.572151 1378731 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0831 23:11:36.584464 1378731 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0831 23:11:36.695409 1378731 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0831 23:11:36.804690 1378731 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0831 23:11:36.818299 1378731 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0831 23:11:36.839209 1378731 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I0831 23:11:36.850012 1378731 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0831 23:11:36.860212 1378731 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0831 23:11:36.860356 1378731 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0831 23:11:36.871528 1378731 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0831 23:11:36.881865 1378731 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0831 23:11:36.894304 1378731 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0831 23:11:36.907561 1378731 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0831 23:11:36.917315 1378731 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0831 23:11:36.928349 1378731 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0831 23:11:36.938070 1378731 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0831 23:11:36.947174 1378731 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0831 23:11:37.088412 1378731 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0831 23:11:37.260269 1378731 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0831 23:11:37.260400 1378731 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0831 23:11:37.264228 1378731 start.go:563] Will wait 60s for crictl version
	I0831 23:11:37.264347 1378731 ssh_runner.go:195] Run: which crictl
	I0831 23:11:37.267832 1378731 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0831 23:11:37.303429 1378731 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.21
	RuntimeApiVersion:  v1
	I0831 23:11:37.303562 1378731 ssh_runner.go:195] Run: containerd --version
	I0831 23:11:37.327983 1378731 ssh_runner.go:195] Run: containerd --version
	I0831 23:11:37.354345 1378731 out.go:177] * Preparing Kubernetes v1.20.0 on containerd 1.7.21 ...
	I0831 23:11:37.356454 1378731 cli_runner.go:164] Run: docker network inspect old-k8s-version-777320 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0831 23:11:37.372147 1378731 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0831 23:11:37.375718 1378731 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0831 23:11:37.386620 1378731 kubeadm.go:883] updating cluster {Name:old-k8s-version-777320 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-777320 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0831 23:11:37.386745 1378731 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0831 23:11:37.386814 1378731 ssh_runner.go:195] Run: sudo crictl images --output json
	I0831 23:11:37.423104 1378731 containerd.go:627] all images are preloaded for containerd runtime.
	I0831 23:11:37.423128 1378731 containerd.go:534] Images already preloaded, skipping extraction
	I0831 23:11:37.423187 1378731 ssh_runner.go:195] Run: sudo crictl images --output json
	I0831 23:11:37.461017 1378731 containerd.go:627] all images are preloaded for containerd runtime.
	I0831 23:11:37.461044 1378731 cache_images.go:84] Images are preloaded, skipping loading
	I0831 23:11:37.461053 1378731 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.20.0 containerd true true} ...
	I0831 23:11:37.461181 1378731 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=old-k8s-version-777320 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-777320 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0831 23:11:37.461254 1378731 ssh_runner.go:195] Run: sudo crictl info
	I0831 23:11:37.509356 1378731 cni.go:84] Creating CNI manager for ""
	I0831 23:11:37.509429 1378731 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0831 23:11:37.509456 1378731 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0831 23:11:37.509510 1378731 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-777320 NodeName:old-k8s-version-777320 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0831 23:11:37.509700 1378731 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "old-k8s-version-777320"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0831 23:11:37.509798 1378731 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0831 23:11:37.521340 1378731 binaries.go:44] Found k8s binaries, skipping transfer
	I0831 23:11:37.521422 1378731 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0831 23:11:37.536165 1378731 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (442 bytes)
	I0831 23:11:37.561387 1378731 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0831 23:11:37.592943 1378731 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2125 bytes)
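
The 2125-byte kubeadm.yaml.new written above is a single four-document YAML stream: InitConfiguration and ClusterConfiguration for kubeadm itself, plus KubeletConfiguration and KubeProxyConfiguration for the components. A small sketch that enumerates those documents, assuming gopkg.in/yaml.v3:

// kubeadm_docs.go: enumerate the documents in the generated config.
package main

import (
	"bytes"
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

type docMeta struct {
	APIVersion string `yaml:"apiVersion"`
	Kind       string `yaml:"kind"`
}

func main() {
	raw, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		panic(err)
	}
	dec := yaml.NewDecoder(bytes.NewReader(raw))
	for {
		var m docMeta
		if err := dec.Decode(&m); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		// e.g. "kubeadm.k8s.io/v1beta2 InitConfiguration"
		fmt.Printf("%s %s\n", m.APIVersion, m.Kind)
	}
}
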
	I0831 23:11:37.615248 1378731 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0831 23:11:37.618658 1378731 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
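
The bash one-liner above pins control-plane.minikube.internal in /etc/hosts idempotently: any stale mapping for the name is stripped before the current IP is appended, so repeated runs leave exactly one entry. The same logic in Go, as a sketch:

// pin_host.go: the /etc/hosts rewrite from the one-liner above, in Go.
package main

import (
	"fmt"
	"os"
	"strings"
)

// pinHost drops any line already mapping name and appends "ip<TAB>name".
func pinHost(hosts, ip, name string) string {
	var keep []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // stale mapping, drop it
		}
		keep = append(keep, line)
	}
	keep = append(keep, ip+"\t"+name)
	return strings.Join(keep, "\n") + "\n"
}

func main() {
	raw, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	fmt.Print(pinHost(string(raw), "192.168.85.2", "control-plane.minikube.internal"))
}
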
	I0831 23:11:37.633118 1378731 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0831 23:11:37.750392 1378731 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0831 23:11:37.769825 1378731 certs.go:68] Setting up /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/old-k8s-version-777320 for IP: 192.168.85.2
	I0831 23:11:37.769898 1378731 certs.go:194] generating shared ca certs ...
	I0831 23:11:37.769928 1378731 certs.go:226] acquiring lock for ca certs: {Name:mk34cb0d7c9ce07dfc3fb4f77a59e5e1d853f8c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 23:11:37.770086 1378731 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18943-1161402/.minikube/ca.key
	I0831 23:11:37.770166 1378731 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18943-1161402/.minikube/proxy-client-ca.key
	I0831 23:11:37.770200 1378731 certs.go:256] generating profile certs ...
	I0831 23:11:37.770327 1378731 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/old-k8s-version-777320/client.key
	I0831 23:11:37.770434 1378731 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/old-k8s-version-777320/apiserver.key.ee02ddc2
	I0831 23:11:37.770516 1378731 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/old-k8s-version-777320/proxy-client.key
	I0831 23:11:37.770664 1378731 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-1161402/.minikube/certs/1166785.pem (1338 bytes)
	W0831 23:11:37.770719 1378731 certs.go:480] ignoring /home/jenkins/minikube-integration/18943-1161402/.minikube/certs/1166785_empty.pem, impossibly tiny 0 bytes
	I0831 23:11:37.770743 1378731 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-1161402/.minikube/certs/ca-key.pem (1679 bytes)
	I0831 23:11:37.770803 1378731 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-1161402/.minikube/certs/ca.pem (1078 bytes)
	I0831 23:11:37.770856 1378731 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-1161402/.minikube/certs/cert.pem (1123 bytes)
	I0831 23:11:37.770914 1378731 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-1161402/.minikube/certs/key.pem (1679 bytes)
	I0831 23:11:37.770993 1378731 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-1161402/.minikube/files/etc/ssl/certs/11667852.pem (1708 bytes)
	I0831 23:11:37.771662 1378731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-1161402/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0831 23:11:37.836153 1378731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-1161402/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0831 23:11:37.900528 1378731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-1161402/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0831 23:11:37.981886 1378731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-1161402/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0831 23:11:38.041246 1378731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/old-k8s-version-777320/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0831 23:11:38.087543 1378731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/old-k8s-version-777320/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0831 23:11:38.116853 1378731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/old-k8s-version-777320/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0831 23:11:38.145800 1378731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/old-k8s-version-777320/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0831 23:11:38.181964 1378731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-1161402/.minikube/files/etc/ssl/certs/11667852.pem --> /usr/share/ca-certificates/11667852.pem (1708 bytes)
	I0831 23:11:38.222719 1378731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-1161402/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0831 23:11:38.256341 1378731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-1161402/.minikube/certs/1166785.pem --> /usr/share/ca-certificates/1166785.pem (1338 bytes)
	I0831 23:11:38.294188 1378731 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0831 23:11:38.318885 1378731 ssh_runner.go:195] Run: openssl version
	I0831 23:11:38.326215 1378731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11667852.pem && ln -fs /usr/share/ca-certificates/11667852.pem /etc/ssl/certs/11667852.pem"
	I0831 23:11:38.337706 1378731 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11667852.pem
	I0831 23:11:38.343688 1378731 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 31 22:31 /usr/share/ca-certificates/11667852.pem
	I0831 23:11:38.343768 1378731 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11667852.pem
	I0831 23:11:38.353492 1378731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11667852.pem /etc/ssl/certs/3ec20f2e.0"
	I0831 23:11:38.362607 1378731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0831 23:11:38.375290 1378731 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0831 23:11:38.380224 1378731 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 31 22:21 /usr/share/ca-certificates/minikubeCA.pem
	I0831 23:11:38.380311 1378731 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0831 23:11:38.391839 1378731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0831 23:11:38.406268 1378731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1166785.pem && ln -fs /usr/share/ca-certificates/1166785.pem /etc/ssl/certs/1166785.pem"
	I0831 23:11:38.417605 1378731 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1166785.pem
	I0831 23:11:38.422328 1378731 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 31 22:31 /usr/share/ca-certificates/1166785.pem
	I0831 23:11:38.422413 1378731 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1166785.pem
	I0831 23:11:38.431386 1378731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1166785.pem /etc/ssl/certs/51391683.0"
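
The openssl/ln pairs above reproduce what c_rehash does: OpenSSL resolves trust anchors in /etc/ssl/certs by subject-hash filenames, so each CA PEM gets a <hash>.0 symlink. A sketch of one hash-and-link step; it shells out to the same openssl invocation and needs root to write the link, like the sudo'd commands above:

// cert_link.go: one hash-and-symlink step for a CA certificate.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func linkCert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
	os.Remove(link) // ignore the error: the link may not exist yet
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
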
	I0831 23:11:38.445160 1378731 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0831 23:11:38.450441 1378731 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0831 23:11:38.459574 1378731 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0831 23:11:38.471636 1378731 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0831 23:11:38.479390 1378731 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0831 23:11:38.488046 1378731 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0831 23:11:38.496256 1378731 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
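
Each `-checkend 86400` run above asserts that a certificate stays valid for at least another 24 hours; a non-zero exit would force regeneration. The equivalent check in pure Go, as a sketch:

// checkend.go: the `openssl x509 -checkend 86400` assertion in Go.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// validFor reports whether the first certificate in path is still valid
// d from now, which is what -checkend tests against NotAfter.
func validFor(path string, d time.Duration) (bool, error) {
	raw, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return !time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(ok, err)
}
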
	I0831 23:11:38.506248 1378731 kubeadm.go:392] StartCluster: {Name:old-k8s-version-777320 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-777320 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 23:11:38.506358 1378731 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0831 23:11:38.506433 1378731 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0831 23:11:38.586317 1378731 cri.go:89] found id: "26aa1d510c36cb526536c021669ae3f5436cdad4f0a6f9c30b4b361249af6077"
	I0831 23:11:38.586359 1378731 cri.go:89] found id: "0ddaf9925b5b8a2498baf923aedda0b04394c82cc0c7c18555b7233ea40ba24c"
	I0831 23:11:38.586364 1378731 cri.go:89] found id: "766da44566779be7d752b980921d12fc4264187a60dc3e7ee972e8abe6ba683f"
	I0831 23:11:38.586367 1378731 cri.go:89] found id: "375dfe16bc1a3f88bb1adae552829f1de1f6ccd7a3b2bf3eb9e823c85daa3329"
	I0831 23:11:38.586370 1378731 cri.go:89] found id: "36189b23eed4dd70cc8796dc64533ece0f97c8515188d9c4bb6817079cf848fa"
	I0831 23:11:38.586375 1378731 cri.go:89] found id: "cb403497717362a835119a12fcd9a98f048e1513652343987c5706732ded954d"
	I0831 23:11:38.586378 1378731 cri.go:89] found id: "4a53de8c7cfef15e1b4d2eb2d08e3992e87a5d45cb56ec1579cce90b650a86a3"
	I0831 23:11:38.586384 1378731 cri.go:89] found id: "24d2daafe86a0dcb6af4171206676787738fec4b49e748c8e217d63f6af8bb12"
	I0831 23:11:38.586387 1378731 cri.go:89] found id: ""
	I0831 23:11:38.586455 1378731 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0831 23:11:38.610888 1378731 cri.go:116] JSON = null
	W0831 23:11:38.610958 1378731 kubeadm.go:399] unpause failed: list paused: list returned 0 containers, but ps returned 8
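
The mismatch above (crictl sees 8 kube-system containers, runc's JSON list is null) is why the unpause step is skipped: `runc list -f json` prints a JSON array of container states, or a literal null when its root holds nothing to report. A sketch of interpreting that output; the state field names are an assumption:

// runc_list.go: decode `runc list -f json`; a null body unmarshals into
// a nil slice, matching the "JSON = null" line above.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type runcState struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

func main() {
	out, err := exec.Command("sudo", "runc", "--root", "/run/containerd/runc/k8s.io", "list", "-f", "json").Output()
	if err != nil {
		panic(err)
	}
	var states []runcState // stays nil for a literal null
	if err := json.Unmarshal(out, &states); err != nil {
		panic(err)
	}
	fmt.Printf("runc reports %d containers\n", len(states))
	for _, s := range states {
		fmt.Println(s.ID, s.Status)
	}
}
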
	I0831 23:11:38.611044 1378731 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0831 23:11:38.633500 1378731 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0831 23:11:38.633533 1378731 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0831 23:11:38.633594 1378731 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0831 23:11:38.646138 1378731 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0831 23:11:38.646637 1378731 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-777320" does not appear in /home/jenkins/minikube-integration/18943-1161402/kubeconfig
	I0831 23:11:38.646770 1378731 kubeconfig.go:62] /home/jenkins/minikube-integration/18943-1161402/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-777320" cluster setting kubeconfig missing "old-k8s-version-777320" context setting]
	I0831 23:11:38.647111 1378731 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-1161402/kubeconfig: {Name:mkb68eea79d6c84410a77cb04886486384945560 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 23:11:38.648836 1378731 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0831 23:11:38.662905 1378731 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.85.2
	I0831 23:11:38.662942 1378731 kubeadm.go:597] duration metric: took 29.396367ms to restartPrimaryControlPlane
	I0831 23:11:38.662953 1378731 kubeadm.go:394] duration metric: took 156.732898ms to StartCluster
	I0831 23:11:38.662972 1378731 settings.go:142] acquiring lock: {Name:mkccd5b6f7cf87789c72627e47240ed1100ed135 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 23:11:38.663034 1378731 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18943-1161402/kubeconfig
	I0831 23:11:38.663646 1378731 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-1161402/kubeconfig: {Name:mkb68eea79d6c84410a77cb04886486384945560 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 23:11:38.663849 1378731 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0831 23:11:38.664229 1378731 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0831 23:11:38.664308 1378731 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-777320"
	I0831 23:11:38.664353 1378731 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-777320"
	W0831 23:11:38.664365 1378731 addons.go:243] addon storage-provisioner should already be in state true
	I0831 23:11:38.664388 1378731 host.go:66] Checking if "old-k8s-version-777320" exists ...
	I0831 23:11:38.664869 1378731 cli_runner.go:164] Run: docker container inspect old-k8s-version-777320 --format={{.State.Status}}
	I0831 23:11:38.665243 1378731 config.go:182] Loaded profile config "old-k8s-version-777320": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0831 23:11:38.665313 1378731 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-777320"
	I0831 23:11:38.665354 1378731 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-777320"
	I0831 23:11:38.665617 1378731 cli_runner.go:164] Run: docker container inspect old-k8s-version-777320 --format={{.State.Status}}
	I0831 23:11:38.668756 1378731 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-777320"
	I0831 23:11:38.668844 1378731 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-777320"
	W0831 23:11:38.668867 1378731 addons.go:243] addon metrics-server should already be in state true
	I0831 23:11:38.668929 1378731 host.go:66] Checking if "old-k8s-version-777320" exists ...
	I0831 23:11:38.669112 1378731 addons.go:69] Setting dashboard=true in profile "old-k8s-version-777320"
	I0831 23:11:38.669150 1378731 addons.go:234] Setting addon dashboard=true in "old-k8s-version-777320"
	W0831 23:11:38.669253 1378731 addons.go:243] addon dashboard should already be in state true
	I0831 23:11:38.669287 1378731 host.go:66] Checking if "old-k8s-version-777320" exists ...
	I0831 23:11:38.669438 1378731 out.go:177] * Verifying Kubernetes components...
	I0831 23:11:38.669553 1378731 cli_runner.go:164] Run: docker container inspect old-k8s-version-777320 --format={{.State.Status}}
	I0831 23:11:38.673787 1378731 cli_runner.go:164] Run: docker container inspect old-k8s-version-777320 --format={{.State.Status}}
	I0831 23:11:38.675406 1378731 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0831 23:11:38.715995 1378731 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-777320"
	W0831 23:11:38.716019 1378731 addons.go:243] addon default-storageclass should already be in state true
	I0831 23:11:38.716045 1378731 host.go:66] Checking if "old-k8s-version-777320" exists ...
	I0831 23:11:38.716464 1378731 cli_runner.go:164] Run: docker container inspect old-k8s-version-777320 --format={{.State.Status}}
	I0831 23:11:38.737855 1378731 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0831 23:11:38.740793 1378731 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0831 23:11:38.740817 1378731 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0831 23:11:38.740882 1378731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-777320
	I0831 23:11:38.771363 1378731 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0831 23:11:38.776675 1378731 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0831 23:11:38.776818 1378731 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0831 23:11:38.776833 1378731 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0831 23:11:38.776902 1378731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-777320
	I0831 23:11:38.784671 1378731 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0831 23:11:38.784869 1378731 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0831 23:11:38.784882 1378731 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0831 23:11:38.784946 1378731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-777320
	I0831 23:11:38.788770 1378731 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0831 23:11:38.788794 1378731 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0831 23:11:38.788867 1378731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-777320
	I0831 23:11:38.800785 1378731 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34569 SSHKeyPath:/home/jenkins/minikube-integration/18943-1161402/.minikube/machines/old-k8s-version-777320/id_rsa Username:docker}
	I0831 23:11:38.851667 1378731 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34569 SSHKeyPath:/home/jenkins/minikube-integration/18943-1161402/.minikube/machines/old-k8s-version-777320/id_rsa Username:docker}
	I0831 23:11:38.851757 1378731 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34569 SSHKeyPath:/home/jenkins/minikube-integration/18943-1161402/.minikube/machines/old-k8s-version-777320/id_rsa Username:docker}
	I0831 23:11:38.868871 1378731 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34569 SSHKeyPath:/home/jenkins/minikube-integration/18943-1161402/.minikube/machines/old-k8s-version-777320/id_rsa Username:docker}
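
Each sshutil.go line above opens a client to the container's forwarded SSH port using the profile's key. Roughly what that amounts to with golang.org/x/crypto/ssh, as a sketch; the InsecureIgnoreHostKey callback is an assumption about how ephemeral test VMs are handled, not verified against minikube's code:

// ssh_client.go: open an SSH client with the port/key/user logged above.
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/18943-1161402/.minikube/machines/old-k8s-version-777320/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // ephemeral test VM, assumed acceptable
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:34569", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	fmt.Println("connected")
}
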
	I0831 23:11:38.949339 1378731 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0831 23:11:39.007257 1378731 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-777320" to be "Ready" ...
	I0831 23:11:39.051819 1378731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0831 23:11:39.133112 1378731 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0831 23:11:39.133137 1378731 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0831 23:11:39.177172 1378731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0831 23:11:39.210639 1378731 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0831 23:11:39.210668 1378731 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0831 23:11:39.226607 1378731 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0831 23:11:39.226634 1378731 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0831 23:11:39.275882 1378731 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0831 23:11:39.275908 1378731 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	W0831 23:11:39.344401 1378731 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0831 23:11:39.344449 1378731 retry.go:31] will retry after 322.961593ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
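
The addon applies keep failing with connection refused because the apiserver is still coming up, so each manifest is retried on a jittered, growing delay (322ms, 152ms, 186ms, and so on above). A sketch of that retry shape; the exact policy here is an assumption, not minikube's retry.go:

// retry_apply.go: a fixed-attempt retry loop with jittered backoff.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

func retry(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		// Grow the delay each round and add jitter so retries spread out.
		d := base*time.Duration(1<<i) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", d, err)
		time.Sleep(d)
	}
	return err
}

func main() {
	_ = retry(5, 300*time.Millisecond, func() error {
		return fmt.Errorf("connection to the server localhost:8443 refused")
	})
}
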
	I0831 23:11:39.351106 1378731 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0831 23:11:39.351132 1378731 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0831 23:11:39.376531 1378731 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0831 23:11:39.376561 1378731 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0831 23:11:39.427816 1378731 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0831 23:11:39.427848 1378731 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0831 23:11:39.439852 1378731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0831 23:11:39.498149 1378731 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0831 23:11:39.498183 1378731 retry.go:31] will retry after 152.557067ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0831 23:11:39.513068 1378731 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0831 23:11:39.513147 1378731 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0831 23:11:39.577235 1378731 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0831 23:11:39.577315 1378731 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	W0831 23:11:39.638102 1378731 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0831 23:11:39.638181 1378731 retry.go:31] will retry after 186.307876ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0831 23:11:39.650717 1378731 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0831 23:11:39.650881 1378731 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0831 23:11:39.650966 1378731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0831 23:11:39.668283 1378731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0831 23:11:39.692567 1378731 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0831 23:11:39.692675 1378731 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0831 23:11:39.785685 1378731 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0831 23:11:39.785758 1378731 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0831 23:11:39.824979 1378731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0831 23:11:39.879579 1378731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0831 23:11:39.989193 1378731 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0831 23:11:39.989227 1378731 retry.go:31] will retry after 312.731886ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0831 23:11:40.022201 1378731 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0831 23:11:40.022238 1378731 retry.go:31] will retry after 332.965344ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0831 23:11:40.143983 1378731 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0831 23:11:40.144019 1378731 retry.go:31] will retry after 294.435757ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0831 23:11:40.145098 1378731 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0831 23:11:40.145123 1378731 retry.go:31] will retry after 318.056978ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0831 23:11:40.302498 1378731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0831 23:11:40.355869 1378731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0831 23:11:40.439229 1378731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0831 23:11:40.463628 1378731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0831 23:11:40.541909 1378731 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0831 23:11:40.541944 1378731 retry.go:31] will retry after 424.678116ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0831 23:11:40.695200 1378731 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0831 23:11:40.695235 1378731 retry.go:31] will retry after 478.420841ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0831 23:11:40.695287 1378731 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0831 23:11:40.695299 1378731 retry.go:31] will retry after 449.135693ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0831 23:11:40.762382 1378731 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0831 23:11:40.762416 1378731 retry.go:31] will retry after 244.069745ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0831 23:11:40.966780 1378731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0831 23:11:41.007068 1378731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0831 23:11:41.008709 1378731 node_ready.go:53] error getting node "old-k8s-version-777320": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-777320": dial tcp 192.168.85.2:8443: connect: connection refused
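
node_ready.go polls the node object until its Ready condition is True, tolerating errors like the connection refused above while the control plane restarts. A sketch of that poll with k8s.io/client-go:

// node_ready.go: poll the node's Ready condition until it turns True.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18943-1161402/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "old-k8s-version-777320", metav1.GetOptions{})
		if err != nil {
			// e.g. "connect: connection refused" while the apiserver restarts
			fmt.Println("error getting node:", err)
		} else {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
}
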
	I0831 23:11:41.145239 1378731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0831 23:11:41.174639 1378731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0831 23:11:41.186845 1378731 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0831 23:11:41.186883 1378731 retry.go:31] will retry after 1.229507317s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0831 23:11:41.235172 1378731 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0831 23:11:41.235205 1378731 retry.go:31] will retry after 768.939399ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0831 23:11:41.405216 1378731 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0831 23:11:41.405251 1378731 retry.go:31] will retry after 467.238443ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0831 23:11:41.421724 1378731 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0831 23:11:41.421759 1378731 retry.go:31] will retry after 942.875982ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0831 23:11:41.872742 1378731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0831 23:11:41.999262 1378731 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0831 23:11:41.999296 1378731 retry.go:31] will retry after 1.212792863s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0831 23:11:42.004581 1378731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0831 23:11:42.127681 1378731 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0831 23:11:42.127723 1378731 retry.go:31] will retry after 967.720059ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0831 23:11:42.365142 1378731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0831 23:11:42.417455 1378731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0831 23:11:42.552948 1378731 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0831 23:11:42.552989 1378731 retry.go:31] will retry after 1.873679093s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0831 23:11:42.584317 1378731 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0831 23:11:42.584350 1378731 retry.go:31] will retry after 1.46956855s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0831 23:11:43.095637 1378731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0831 23:11:43.209851 1378731 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0831 23:11:43.209884 1378731 retry.go:31] will retry after 1.729502041s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0831 23:11:43.213150 1378731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0831 23:11:43.339765 1378731 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0831 23:11:43.339794 1378731 retry.go:31] will retry after 2.738322629s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0831 23:11:43.508378 1378731 node_ready.go:53] error getting node "old-k8s-version-777320": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-777320": dial tcp 192.168.85.2:8443: connect: connection refused
	I0831 23:11:44.054326 1378731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0831 23:11:44.187593 1378731 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0831 23:11:44.187680 1378731 retry.go:31] will retry after 1.456310282s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0831 23:11:44.426892 1378731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0831 23:11:44.526438 1378731 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0831 23:11:44.526524 1378731 retry.go:31] will retry after 1.620074156s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0831 23:11:44.939688 1378731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0831 23:11:45.054824 1378731 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0831 23:11:45.054873 1378731 retry.go:31] will retry after 1.417825109s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0831 23:11:45.645038 1378731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0831 23:11:45.737925 1378731 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0831 23:11:45.737961 1378731 retry.go:31] will retry after 2.32677157s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0831 23:11:46.016244 1378731 node_ready.go:53] error getting node "old-k8s-version-777320": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-777320": dial tcp 192.168.85.2:8443: connect: connection refused
	I0831 23:11:46.078557 1378731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0831 23:11:46.147036 1378731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0831 23:11:46.180780 1378731 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0831 23:11:46.180814 1378731 retry.go:31] will retry after 1.539414359s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0831 23:11:46.244968 1378731 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0831 23:11:46.245001 1378731 retry.go:31] will retry after 2.277868708s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0831 23:11:46.473466 1378731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0831 23:11:46.603822 1378731 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0831 23:11:46.603904 1378731 retry.go:31] will retry after 2.603018679s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0831 23:11:47.721370 1378731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0831 23:11:48.065049 1378731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0831 23:11:48.523996 1378731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0831 23:11:49.207742 1378731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0831 23:11:58.012308 1378731 node_ready.go:53] error getting node "old-k8s-version-777320": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-777320": net/http: TLS handshake timeout
	I0831 23:11:58.101744 1378731 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (10.380335928s)
	W0831 23:11:58.101797 1378731 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	I0831 23:11:58.101817 1378731 retry.go:31] will retry after 2.67664588s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
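Everything from 23:11:43 to this point is minikube's addon-apply retry loop in action: each kubectl apply that fails, first with "connection refused" while the restarted apiserver on localhost:8443 is still coming up, then with a TLS handshake timeout, is re-run after a randomized backoff (retry.go) until it succeeds. A quick manual readiness probe during such a window could look like the following, run from inside the node (e.g. via minikube ssh); it assumes the default v1.20 apiserver configuration, where /readyz is readable anonymously:

	curl -sk https://localhost:8443/readyz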
	I0831 23:11:58.255715 1378731 node_ready.go:49] node "old-k8s-version-777320" has status "Ready":"True"
	I0831 23:11:58.255748 1378731 node_ready.go:38] duration metric: took 19.248439892s for node "old-k8s-version-777320" to be "Ready" ...
	I0831 23:11:58.255760 1378731 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods, including pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler], to be "Ready" ...
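The individual waits that follow use minikube's pod_ready helper, which polls each pod's Ready condition until it is True or the deadline expires. An equivalent one-off check with the same kubectl binary and kubeconfig recorded in this log could look like this (the label selector is one of those listed above):

	sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m0s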
	I0831 23:11:58.415519 1378731 pod_ready.go:79] waiting up to 6m0s for pod "coredns-74ff55c5b-9zjlw" in "kube-system" namespace to be "Ready" ...
	I0831 23:11:58.454355 1378731 pod_ready.go:93] pod "coredns-74ff55c5b-9zjlw" in "kube-system" namespace has status "Ready":"True"
	I0831 23:11:58.454427 1378731 pod_ready.go:82] duration metric: took 38.869997ms for pod "coredns-74ff55c5b-9zjlw" in "kube-system" namespace to be "Ready" ...
	I0831 23:11:58.454453 1378731 pod_ready.go:79] waiting up to 6m0s for pod "etcd-old-k8s-version-777320" in "kube-system" namespace to be "Ready" ...
	I0831 23:11:58.634689 1378731 pod_ready.go:93] pod "etcd-old-k8s-version-777320" in "kube-system" namespace has status "Ready":"True"
	I0831 23:11:58.634776 1378731 pod_ready.go:82] duration metric: took 180.301236ms for pod "etcd-old-k8s-version-777320" in "kube-system" namespace to be "Ready" ...
	I0831 23:11:58.634823 1378731 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-777320" in "kube-system" namespace to be "Ready" ...
	I0831 23:11:58.654225 1378731 pod_ready.go:93] pod "kube-apiserver-old-k8s-version-777320" in "kube-system" namespace has status "Ready":"True"
	I0831 23:11:58.654323 1378731 pod_ready.go:82] duration metric: took 19.451929ms for pod "kube-apiserver-old-k8s-version-777320" in "kube-system" namespace to be "Ready" ...
	I0831 23:11:58.654366 1378731 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-777320" in "kube-system" namespace to be "Ready" ...
	I0831 23:11:58.669157 1378731 pod_ready.go:93] pod "kube-controller-manager-old-k8s-version-777320" in "kube-system" namespace has status "Ready":"True"
	I0831 23:11:58.669177 1378731 pod_ready.go:82] duration metric: took 14.765064ms for pod "kube-controller-manager-old-k8s-version-777320" in "kube-system" namespace to be "Ready" ...
	I0831 23:11:58.669188 1378731 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-wv4m2" in "kube-system" namespace to be "Ready" ...
	I0831 23:11:58.684450 1378731 pod_ready.go:93] pod "kube-proxy-wv4m2" in "kube-system" namespace has status "Ready":"True"
	I0831 23:11:58.684472 1378731 pod_ready.go:82] duration metric: took 15.275798ms for pod "kube-proxy-wv4m2" in "kube-system" namespace to be "Ready" ...
	I0831 23:11:58.684483 1378731 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-777320" in "kube-system" namespace to be "Ready" ...
	I0831 23:11:59.815820 1378731 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (11.750731556s)
	I0831 23:11:59.816023 1378731 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (11.291999097s)
	I0831 23:12:00.174737 1378731 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (10.966862691s)
	I0831 23:12:00.177439 1378731 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features, please run:
	
		minikube -p old-k8s-version-777320 addons enable metrics-server
	
	I0831 23:12:00.692371 1378731 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-777320" in "kube-system" namespace has status "Ready":"False"
	I0831 23:12:00.779393 1378731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0831 23:12:01.207928 1378731 addons.go:475] Verifying addon metrics-server=true in "old-k8s-version-777320"
	I0831 23:12:01.209711 1378731 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, dashboard, metrics-server
	I0831 23:12:01.211763 1378731 addons.go:510] duration metric: took 22.547528426s for enable addons: enabled=[storage-provisioner default-storageclass dashboard metrics-server]
	I0831 23:12:03.190721 1378731 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-777320" in "kube-system" namespace has status "Ready":"False"
	I0831 23:12:05.690463 1378731 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-777320" in "kube-system" namespace has status "Ready":"False"
	I0831 23:12:07.692026 1378731 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-777320" in "kube-system" namespace has status "Ready":"False"
	I0831 23:12:10.191037 1378731 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-777320" in "kube-system" namespace has status "Ready":"False"
	I0831 23:12:12.191484 1378731 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-777320" in "kube-system" namespace has status "Ready":"False"
	I0831 23:12:14.690025 1378731 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-777320" in "kube-system" namespace has status "Ready":"False"
	I0831 23:12:16.690512 1378731 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-777320" in "kube-system" namespace has status "Ready":"False"
	I0831 23:12:18.691292 1378731 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-777320" in "kube-system" namespace has status "Ready":"False"
	I0831 23:12:21.193555 1378731 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-777320" in "kube-system" namespace has status "Ready":"False"
	I0831 23:12:23.692583 1378731 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-777320" in "kube-system" namespace has status "Ready":"False"
	I0831 23:12:25.693679 1378731 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-777320" in "kube-system" namespace has status "Ready":"False"
	I0831 23:12:28.190891 1378731 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-777320" in "kube-system" namespace has status "Ready":"False"
	I0831 23:12:30.191926 1378731 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-777320" in "kube-system" namespace has status "Ready":"False"
	I0831 23:12:32.690502 1378731 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-777320" in "kube-system" namespace has status "Ready":"False"
	I0831 23:12:35.190820 1378731 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-777320" in "kube-system" namespace has status "Ready":"False"
	I0831 23:12:37.191863 1378731 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-777320" in "kube-system" namespace has status "Ready":"False"
	I0831 23:12:39.691029 1378731 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-777320" in "kube-system" namespace has status "Ready":"False"
	I0831 23:12:42.193412 1378731 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-777320" in "kube-system" namespace has status "Ready":"False"
	I0831 23:12:44.694913 1378731 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-777320" in "kube-system" namespace has status "Ready":"False"
	I0831 23:12:47.191173 1378731 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-777320" in "kube-system" namespace has status "Ready":"False"
	I0831 23:12:49.695833 1378731 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-777320" in "kube-system" namespace has status "Ready":"False"
	I0831 23:12:52.191824 1378731 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-777320" in "kube-system" namespace has status "Ready":"False"
	I0831 23:12:54.691448 1378731 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-777320" in "kube-system" namespace has status "Ready":"False"
	I0831 23:12:57.193952 1378731 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-777320" in "kube-system" namespace has status "Ready":"False"
	I0831 23:12:59.694321 1378731 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-777320" in "kube-system" namespace has status "Ready":"False"
	I0831 23:13:02.191299 1378731 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-777320" in "kube-system" namespace has status "Ready":"False"
	I0831 23:13:04.691939 1378731 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-777320" in "kube-system" namespace has status "Ready":"False"
	I0831 23:13:07.191322 1378731 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-777320" in "kube-system" namespace has status "Ready":"False"
	I0831 23:13:09.193357 1378731 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-777320" in "kube-system" namespace has status "Ready":"False"
	I0831 23:13:11.194224 1378731 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-777320" in "kube-system" namespace has status "Ready":"False"
	I0831 23:13:13.691715 1378731 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-777320" in "kube-system" namespace has status "Ready":"False"
	I0831 23:13:15.690548 1378731 pod_ready.go:93] pod "kube-scheduler-old-k8s-version-777320" in "kube-system" namespace has status "Ready":"True"
	I0831 23:13:15.690572 1378731 pod_ready.go:82] duration metric: took 1m17.006081998s for pod "kube-scheduler-old-k8s-version-777320" in "kube-system" namespace to be "Ready" ...
	I0831 23:13:15.690583 1378731 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-9975d5f86-dl7gj" in "kube-system" namespace to be "Ready" ...
	I0831 23:13:17.696613 1378731 pod_ready.go:103] pod "metrics-server-9975d5f86-dl7gj" in "kube-system" namespace has status "Ready":"False"
	I0831 23:13:19.697907 1378731 pod_ready.go:103] pod "metrics-server-9975d5f86-dl7gj" in "kube-system" namespace has status "Ready":"False"
	I0831 23:13:22.198364 1378731 pod_ready.go:103] pod "metrics-server-9975d5f86-dl7gj" in "kube-system" namespace has status "Ready":"False"
	I0831 23:13:24.696283 1378731 pod_ready.go:103] pod "metrics-server-9975d5f86-dl7gj" in "kube-system" namespace has status "Ready":"False"
	I0831 23:13:26.697096 1378731 pod_ready.go:103] pod "metrics-server-9975d5f86-dl7gj" in "kube-system" namespace has status "Ready":"False"
	I0831 23:13:28.697287 1378731 pod_ready.go:103] pod "metrics-server-9975d5f86-dl7gj" in "kube-system" namespace has status "Ready":"False"
	I0831 23:13:31.197273 1378731 pod_ready.go:103] pod "metrics-server-9975d5f86-dl7gj" in "kube-system" namespace has status "Ready":"False"
	I0831 23:13:33.197608 1378731 pod_ready.go:103] pod "metrics-server-9975d5f86-dl7gj" in "kube-system" namespace has status "Ready":"False"
	I0831 23:13:35.696960 1378731 pod_ready.go:103] pod "metrics-server-9975d5f86-dl7gj" in "kube-system" namespace has status "Ready":"False"
	I0831 23:13:37.697610 1378731 pod_ready.go:103] pod "metrics-server-9975d5f86-dl7gj" in "kube-system" namespace has status "Ready":"False"
	I0831 23:13:39.698516 1378731 pod_ready.go:103] pod "metrics-server-9975d5f86-dl7gj" in "kube-system" namespace has status "Ready":"False"
	I0831 23:13:41.701870 1378731 pod_ready.go:103] pod "metrics-server-9975d5f86-dl7gj" in "kube-system" namespace has status "Ready":"False"
	I0831 23:13:44.196741 1378731 pod_ready.go:103] pod "metrics-server-9975d5f86-dl7gj" in "kube-system" namespace has status "Ready":"False"
	I0831 23:13:46.695826 1378731 pod_ready.go:103] pod "metrics-server-9975d5f86-dl7gj" in "kube-system" namespace has status "Ready":"False"
	I0831 23:13:48.702143 1378731 pod_ready.go:103] pod "metrics-server-9975d5f86-dl7gj" in "kube-system" namespace has status "Ready":"False"
	I0831 23:13:51.197077 1378731 pod_ready.go:103] pod "metrics-server-9975d5f86-dl7gj" in "kube-system" namespace has status "Ready":"False"
	I0831 23:13:53.199769 1378731 pod_ready.go:103] pod "metrics-server-9975d5f86-dl7gj" in "kube-system" namespace has status "Ready":"False"
	I0831 23:13:55.696539 1378731 pod_ready.go:103] pod "metrics-server-9975d5f86-dl7gj" in "kube-system" namespace has status "Ready":"False"
	I0831 23:13:57.697083 1378731 pod_ready.go:103] pod "metrics-server-9975d5f86-dl7gj" in "kube-system" namespace has status "Ready":"False"
	I0831 23:13:59.697346 1378731 pod_ready.go:103] pod "metrics-server-9975d5f86-dl7gj" in "kube-system" namespace has status "Ready":"False"
	I0831 23:14:01.697834 1378731 pod_ready.go:103] pod "metrics-server-9975d5f86-dl7gj" in "kube-system" namespace has status "Ready":"False"
	I0831 23:14:03.698734 1378731 pod_ready.go:103] pod "metrics-server-9975d5f86-dl7gj" in "kube-system" namespace has status "Ready":"False"
	I0831 23:14:06.215182 1378731 pod_ready.go:103] pod "metrics-server-9975d5f86-dl7gj" in "kube-system" namespace has status "Ready":"False"
	I0831 23:14:08.696475 1378731 pod_ready.go:103] pod "metrics-server-9975d5f86-dl7gj" in "kube-system" namespace has status "Ready":"False"
	I0831 23:14:11.196506 1378731 pod_ready.go:103] pod "metrics-server-9975d5f86-dl7gj" in "kube-system" namespace has status "Ready":"False"
	I0831 23:14:13.198343 1378731 pod_ready.go:103] pod "metrics-server-9975d5f86-dl7gj" in "kube-system" namespace has status "Ready":"False"
	I0831 23:14:15.205635 1378731 pod_ready.go:103] pod "metrics-server-9975d5f86-dl7gj" in "kube-system" namespace has status "Ready":"False"
	I0831 23:14:17.697578 1378731 pod_ready.go:103] pod "metrics-server-9975d5f86-dl7gj" in "kube-system" namespace has status "Ready":"False"
	I0831 23:14:19.698115 1378731 pod_ready.go:103] pod "metrics-server-9975d5f86-dl7gj" in "kube-system" namespace has status "Ready":"False"
	I0831 23:14:22.197020 1378731 pod_ready.go:103] pod "metrics-server-9975d5f86-dl7gj" in "kube-system" namespace has status "Ready":"False"
	I0831 23:14:24.198318 1378731 pod_ready.go:103] pod "metrics-server-9975d5f86-dl7gj" in "kube-system" namespace has status "Ready":"False"
	I0831 23:14:26.696577 1378731 pod_ready.go:103] pod "metrics-server-9975d5f86-dl7gj" in "kube-system" namespace has status "Ready":"False"
	I0831 23:14:29.201121 1378731 pod_ready.go:103] pod "metrics-server-9975d5f86-dl7gj" in "kube-system" namespace has status "Ready":"False"
	I0831 23:14:31.203109 1378731 pod_ready.go:103] pod "metrics-server-9975d5f86-dl7gj" in "kube-system" namespace has status "Ready":"False"
	I0831 23:14:33.696847 1378731 pod_ready.go:103] pod "metrics-server-9975d5f86-dl7gj" in "kube-system" namespace has status "Ready":"False"
	I0831 23:14:35.697726 1378731 pod_ready.go:103] pod "metrics-server-9975d5f86-dl7gj" in "kube-system" namespace has status "Ready":"False"
	I0831 23:14:38.196231 1378731 pod_ready.go:103] pod "metrics-server-9975d5f86-dl7gj" in "kube-system" namespace has status "Ready":"False"
	I0831 23:14:40.196969 1378731 pod_ready.go:103] pod "metrics-server-9975d5f86-dl7gj" in "kube-system" namespace has status "Ready":"False"
	I0831 23:14:42.198287 1378731 pod_ready.go:103] pod "metrics-server-9975d5f86-dl7gj" in "kube-system" namespace has status "Ready":"False"
	I0831 23:14:44.709020 1378731 pod_ready.go:103] pod "metrics-server-9975d5f86-dl7gj" in "kube-system" namespace has status "Ready":"False"
	I0831 23:14:47.196611 1378731 pod_ready.go:103] pod "metrics-server-9975d5f86-dl7gj" in "kube-system" namespace has status "Ready":"False"
	I0831 23:14:49.197118 1378731 pod_ready.go:103] pod "metrics-server-9975d5f86-dl7gj" in "kube-system" namespace has status "Ready":"False"
	I0831 23:14:51.696326 1378731 pod_ready.go:103] pod "metrics-server-9975d5f86-dl7gj" in "kube-system" namespace has status "Ready":"False"
	I0831 23:14:54.196212 1378731 pod_ready.go:103] pod "metrics-server-9975d5f86-dl7gj" in "kube-system" namespace has status "Ready":"False"
	I0831 23:14:56.197892 1378731 pod_ready.go:103] pod "metrics-server-9975d5f86-dl7gj" in "kube-system" namespace has status "Ready":"False"
	I0831 23:14:58.696210 1378731 pod_ready.go:103] pod "metrics-server-9975d5f86-dl7gj" in "kube-system" namespace has status "Ready":"False"
	I0831 23:15:00.697616 1378731 pod_ready.go:103] pod "metrics-server-9975d5f86-dl7gj" in "kube-system" namespace has status "Ready":"False"
	I0831 23:15:03.196518 1378731 pod_ready.go:103] pod "metrics-server-9975d5f86-dl7gj" in "kube-system" namespace has status "Ready":"False"
	I0831 23:15:05.198404 1378731 pod_ready.go:103] pod "metrics-server-9975d5f86-dl7gj" in "kube-system" namespace has status "Ready":"False"
	I0831 23:15:07.696398 1378731 pod_ready.go:103] pod "metrics-server-9975d5f86-dl7gj" in "kube-system" namespace has status "Ready":"False"
	I0831 23:15:09.696846 1378731 pod_ready.go:103] pod "metrics-server-9975d5f86-dl7gj" in "kube-system" namespace has status "Ready":"False"
	I0831 23:15:11.697346 1378731 pod_ready.go:103] pod "metrics-server-9975d5f86-dl7gj" in "kube-system" namespace has status "Ready":"False"
	I0831 23:15:14.197747 1378731 pod_ready.go:103] pod "metrics-server-9975d5f86-dl7gj" in "kube-system" namespace has status "Ready":"False"
	I0831 23:15:16.696992 1378731 pod_ready.go:103] pod "metrics-server-9975d5f86-dl7gj" in "kube-system" namespace has status "Ready":"False"
	I0831 23:15:18.697969 1378731 pod_ready.go:103] pod "metrics-server-9975d5f86-dl7gj" in "kube-system" namespace has status "Ready":"False"
	I0831 23:15:21.196969 1378731 pod_ready.go:103] pod "metrics-server-9975d5f86-dl7gj" in "kube-system" namespace has status "Ready":"False"
	I0831 23:15:23.199049 1378731 pod_ready.go:103] pod "metrics-server-9975d5f86-dl7gj" in "kube-system" namespace has status "Ready":"False"
	I0831 23:15:25.697828 1378731 pod_ready.go:103] pod "metrics-server-9975d5f86-dl7gj" in "kube-system" namespace has status "Ready":"False"
	I0831 23:15:28.196668 1378731 pod_ready.go:103] pod "metrics-server-9975d5f86-dl7gj" in "kube-system" namespace has status "Ready":"False"
	I0831 23:15:30.197625 1378731 pod_ready.go:103] pod "metrics-server-9975d5f86-dl7gj" in "kube-system" namespace has status "Ready":"False"
	I0831 23:15:32.695961 1378731 pod_ready.go:103] pod "metrics-server-9975d5f86-dl7gj" in "kube-system" namespace has status "Ready":"False"
	I0831 23:15:34.696422 1378731 pod_ready.go:103] pod "metrics-server-9975d5f86-dl7gj" in "kube-system" namespace has status "Ready":"False"
	I0831 23:15:36.697773 1378731 pod_ready.go:103] pod "metrics-server-9975d5f86-dl7gj" in "kube-system" namespace has status "Ready":"False"
	I0831 23:15:39.196366 1378731 pod_ready.go:103] pod "metrics-server-9975d5f86-dl7gj" in "kube-system" namespace has status "Ready":"False"
	I0831 23:15:41.696800 1378731 pod_ready.go:103] pod "metrics-server-9975d5f86-dl7gj" in "kube-system" namespace has status "Ready":"False"
	I0831 23:15:44.197679 1378731 pod_ready.go:103] pod "metrics-server-9975d5f86-dl7gj" in "kube-system" namespace has status "Ready":"False"
	I0831 23:15:46.697301 1378731 pod_ready.go:103] pod "metrics-server-9975d5f86-dl7gj" in "kube-system" namespace has status "Ready":"False"
	I0831 23:15:48.697622 1378731 pod_ready.go:103] pod "metrics-server-9975d5f86-dl7gj" in "kube-system" namespace has status "Ready":"False"
	I0831 23:15:51.197240 1378731 pod_ready.go:103] pod "metrics-server-9975d5f86-dl7gj" in "kube-system" namespace has status "Ready":"False"
	I0831 23:15:53.198307 1378731 pod_ready.go:103] pod "metrics-server-9975d5f86-dl7gj" in "kube-system" namespace has status "Ready":"False"
	I0831 23:15:55.696882 1378731 pod_ready.go:103] pod "metrics-server-9975d5f86-dl7gj" in "kube-system" namespace has status "Ready":"False"
	I0831 23:15:57.697723 1378731 pod_ready.go:103] pod "metrics-server-9975d5f86-dl7gj" in "kube-system" namespace has status "Ready":"False"
	I0831 23:15:59.699213 1378731 pod_ready.go:103] pod "metrics-server-9975d5f86-dl7gj" in "kube-system" namespace has status "Ready":"False"
	I0831 23:16:02.197612 1378731 pod_ready.go:103] pod "metrics-server-9975d5f86-dl7gj" in "kube-system" namespace has status "Ready":"False"
	I0831 23:16:04.695940 1378731 pod_ready.go:103] pod "metrics-server-9975d5f86-dl7gj" in "kube-system" namespace has status "Ready":"False"
	I0831 23:16:06.697098 1378731 pod_ready.go:103] pod "metrics-server-9975d5f86-dl7gj" in "kube-system" namespace has status "Ready":"False"
	I0831 23:16:09.196553 1378731 pod_ready.go:103] pod "metrics-server-9975d5f86-dl7gj" in "kube-system" namespace has status "Ready":"False"
	I0831 23:16:11.196775 1378731 pod_ready.go:103] pod "metrics-server-9975d5f86-dl7gj" in "kube-system" namespace has status "Ready":"False"
	I0831 23:16:13.197295 1378731 pod_ready.go:103] pod "metrics-server-9975d5f86-dl7gj" in "kube-system" namespace has status "Ready":"False"
	I0831 23:16:15.197621 1378731 pod_ready.go:103] pod "metrics-server-9975d5f86-dl7gj" in "kube-system" namespace has status "Ready":"False"
	I0831 23:16:17.199013 1378731 pod_ready.go:103] pod "metrics-server-9975d5f86-dl7gj" in "kube-system" namespace has status "Ready":"False"
	I0831 23:16:19.696565 1378731 pod_ready.go:103] pod "metrics-server-9975d5f86-dl7gj" in "kube-system" namespace has status "Ready":"False"
	I0831 23:16:22.196858 1378731 pod_ready.go:103] pod "metrics-server-9975d5f86-dl7gj" in "kube-system" namespace has status "Ready":"False"
	I0831 23:16:24.197025 1378731 pod_ready.go:103] pod "metrics-server-9975d5f86-dl7gj" in "kube-system" namespace has status "Ready":"False"
	I0831 23:16:26.197316 1378731 pod_ready.go:103] pod "metrics-server-9975d5f86-dl7gj" in "kube-system" namespace has status "Ready":"False"
	I0831 23:16:28.697307 1378731 pod_ready.go:103] pod "metrics-server-9975d5f86-dl7gj" in "kube-system" namespace has status "Ready":"False"
	I0831 23:16:30.798722 1378731 pod_ready.go:103] pod "metrics-server-9975d5f86-dl7gj" in "kube-system" namespace has status "Ready":"False"
	I0831 23:16:33.199301 1378731 pod_ready.go:103] pod "metrics-server-9975d5f86-dl7gj" in "kube-system" namespace has status "Ready":"False"
	I0831 23:16:35.697709 1378731 pod_ready.go:103] pod "metrics-server-9975d5f86-dl7gj" in "kube-system" namespace has status "Ready":"False"
	I0831 23:16:38.196839 1378731 pod_ready.go:103] pod "metrics-server-9975d5f86-dl7gj" in "kube-system" namespace has status "Ready":"False"
	I0831 23:16:40.197305 1378731 pod_ready.go:103] pod "metrics-server-9975d5f86-dl7gj" in "kube-system" namespace has status "Ready":"False"
	I0831 23:16:42.206822 1378731 pod_ready.go:103] pod "metrics-server-9975d5f86-dl7gj" in "kube-system" namespace has status "Ready":"False"
	I0831 23:16:44.696384 1378731 pod_ready.go:103] pod "metrics-server-9975d5f86-dl7gj" in "kube-system" namespace has status "Ready":"False"
	I0831 23:16:46.696596 1378731 pod_ready.go:103] pod "metrics-server-9975d5f86-dl7gj" in "kube-system" namespace has status "Ready":"False"
	I0831 23:16:48.697473 1378731 pod_ready.go:103] pod "metrics-server-9975d5f86-dl7gj" in "kube-system" namespace has status "Ready":"False"
	I0831 23:16:51.196978 1378731 pod_ready.go:103] pod "metrics-server-9975d5f86-dl7gj" in "kube-system" namespace has status "Ready":"False"
	I0831 23:16:53.697754 1378731 pod_ready.go:103] pod "metrics-server-9975d5f86-dl7gj" in "kube-system" namespace has status "Ready":"False"
	I0831 23:16:56.196597 1378731 pod_ready.go:103] pod "metrics-server-9975d5f86-dl7gj" in "kube-system" namespace has status "Ready":"False"
	I0831 23:16:58.202125 1378731 pod_ready.go:103] pod "metrics-server-9975d5f86-dl7gj" in "kube-system" namespace has status "Ready":"False"
	I0831 23:17:00.277460 1378731 pod_ready.go:103] pod "metrics-server-9975d5f86-dl7gj" in "kube-system" namespace has status "Ready":"False"
	I0831 23:17:02.698322 1378731 pod_ready.go:103] pod "metrics-server-9975d5f86-dl7gj" in "kube-system" namespace has status "Ready":"False"
	I0831 23:17:05.197916 1378731 pod_ready.go:103] pod "metrics-server-9975d5f86-dl7gj" in "kube-system" namespace has status "Ready":"False"
	I0831 23:17:07.199172 1378731 pod_ready.go:103] pod "metrics-server-9975d5f86-dl7gj" in "kube-system" namespace has status "Ready":"False"
	I0831 23:17:09.696783 1378731 pod_ready.go:103] pod "metrics-server-9975d5f86-dl7gj" in "kube-system" namespace has status "Ready":"False"
	I0831 23:17:11.696974 1378731 pod_ready.go:103] pod "metrics-server-9975d5f86-dl7gj" in "kube-system" namespace has status "Ready":"False"
	I0831 23:17:13.697262 1378731 pod_ready.go:103] pod "metrics-server-9975d5f86-dl7gj" in "kube-system" namespace has status "Ready":"False"
	I0831 23:17:15.697766 1378731 pod_ready.go:103] pod "metrics-server-9975d5f86-dl7gj" in "kube-system" namespace has status "Ready":"False"
	I0831 23:17:15.697802 1378731 pod_ready.go:82] duration metric: took 4m0.007204981s for pod "metrics-server-9975d5f86-dl7gj" in "kube-system" namespace to be "Ready" ...
	E0831 23:17:15.697813 1378731 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0831 23:17:15.697821 1378731 pod_ready.go:39] duration metric: took 5m17.442049954s of extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
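The 4m0s cap on this single wait expired because metrics-server never reached Ready; the kubelet problems gathered below show its image pull failing against fake.domain/registry.k8s.io/echoserver:1.4, an unresolvable registry. To confirm a wait failure like this by hand, one could describe the pod with the same kubectl binary and kubeconfig used throughout this log:

	sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl -n kube-system describe pod metrics-server-9975d5f86-dl7gj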
	I0831 23:17:15.697835 1378731 api_server.go:52] waiting for apiserver process to appear ...
	I0831 23:17:15.697872 1378731 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0831 23:17:15.697943 1378731 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0831 23:17:15.736011 1378731 cri.go:89] found id: "2b314ddb4c1637e9a96260a3016921d4c647744eac2a23a86ba97ac80539955e"
	I0831 23:17:15.736035 1378731 cri.go:89] found id: "24d2daafe86a0dcb6af4171206676787738fec4b49e748c8e217d63f6af8bb12"
	I0831 23:17:15.736041 1378731 cri.go:89] found id: ""
	I0831 23:17:15.736048 1378731 logs.go:276] 2 containers: [2b314ddb4c1637e9a96260a3016921d4c647744eac2a23a86ba97ac80539955e 24d2daafe86a0dcb6af4171206676787738fec4b49e748c8e217d63f6af8bb12]
	I0831 23:17:15.736105 1378731 ssh_runner.go:195] Run: which crictl
	I0831 23:17:15.739722 1378731 ssh_runner.go:195] Run: which crictl
	I0831 23:17:15.743067 1378731 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0831 23:17:15.743144 1378731 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0831 23:17:15.783638 1378731 cri.go:89] found id: "adc270d3e8398b7a86ff787dfd6fa155a7deeb47c29d94cf0371e7f3af2cf66a"
	I0831 23:17:15.783667 1378731 cri.go:89] found id: "36189b23eed4dd70cc8796dc64533ece0f97c8515188d9c4bb6817079cf848fa"
	I0831 23:17:15.783672 1378731 cri.go:89] found id: ""
	I0831 23:17:15.783679 1378731 logs.go:276] 2 containers: [adc270d3e8398b7a86ff787dfd6fa155a7deeb47c29d94cf0371e7f3af2cf66a 36189b23eed4dd70cc8796dc64533ece0f97c8515188d9c4bb6817079cf848fa]
	I0831 23:17:15.783736 1378731 ssh_runner.go:195] Run: which crictl
	I0831 23:17:15.787379 1378731 ssh_runner.go:195] Run: which crictl
	I0831 23:17:15.790990 1378731 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0831 23:17:15.791075 1378731 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0831 23:17:15.838120 1378731 cri.go:89] found id: "83c393cb9b979e9591d3e9004c20ad7a85c3cf5a2fb01002fa02cdd21598c0ee"
	I0831 23:17:15.838145 1378731 cri.go:89] found id: "26aa1d510c36cb526536c021669ae3f5436cdad4f0a6f9c30b4b361249af6077"
	I0831 23:17:15.838151 1378731 cri.go:89] found id: ""
	I0831 23:17:15.838159 1378731 logs.go:276] 2 containers: [83c393cb9b979e9591d3e9004c20ad7a85c3cf5a2fb01002fa02cdd21598c0ee 26aa1d510c36cb526536c021669ae3f5436cdad4f0a6f9c30b4b361249af6077]
	I0831 23:17:15.838218 1378731 ssh_runner.go:195] Run: which crictl
	I0831 23:17:15.842023 1378731 ssh_runner.go:195] Run: which crictl
	I0831 23:17:15.845511 1378731 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0831 23:17:15.845635 1378731 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0831 23:17:15.890643 1378731 cri.go:89] found id: "f3564f974c7c186a4cf3110fbdf83d7607a54cbd7d58484326748df57213e666"
	I0831 23:17:15.890664 1378731 cri.go:89] found id: "cb403497717362a835119a12fcd9a98f048e1513652343987c5706732ded954d"
	I0831 23:17:15.890669 1378731 cri.go:89] found id: ""
	I0831 23:17:15.890676 1378731 logs.go:276] 2 containers: [f3564f974c7c186a4cf3110fbdf83d7607a54cbd7d58484326748df57213e666 cb403497717362a835119a12fcd9a98f048e1513652343987c5706732ded954d]
	I0831 23:17:15.890762 1378731 ssh_runner.go:195] Run: which crictl
	I0831 23:17:15.894411 1378731 ssh_runner.go:195] Run: which crictl
	I0831 23:17:15.897666 1378731 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0831 23:17:15.897743 1378731 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0831 23:17:15.943706 1378731 cri.go:89] found id: "a630dbfa8aa905e5ea3326c49649056da78bba5cfd2beda22ff0f2f93515a197"
	I0831 23:17:15.943730 1378731 cri.go:89] found id: "375dfe16bc1a3f88bb1adae552829f1de1f6ccd7a3b2bf3eb9e823c85daa3329"
	I0831 23:17:15.943735 1378731 cri.go:89] found id: ""
	I0831 23:17:15.943766 1378731 logs.go:276] 2 containers: [a630dbfa8aa905e5ea3326c49649056da78bba5cfd2beda22ff0f2f93515a197 375dfe16bc1a3f88bb1adae552829f1de1f6ccd7a3b2bf3eb9e823c85daa3329]
	I0831 23:17:15.943842 1378731 ssh_runner.go:195] Run: which crictl
	I0831 23:17:15.947487 1378731 ssh_runner.go:195] Run: which crictl
	I0831 23:17:15.951394 1378731 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0831 23:17:15.951525 1378731 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0831 23:17:15.998464 1378731 cri.go:89] found id: "2ba9272459fbd3d5920c42ffd67f3fc7be523ddb8abd1b3e2f8db38f6db5a2bd"
	I0831 23:17:15.998489 1378731 cri.go:89] found id: "4a53de8c7cfef15e1b4d2eb2d08e3992e87a5d45cb56ec1579cce90b650a86a3"
	I0831 23:17:15.998496 1378731 cri.go:89] found id: ""
	I0831 23:17:15.998504 1378731 logs.go:276] 2 containers: [2ba9272459fbd3d5920c42ffd67f3fc7be523ddb8abd1b3e2f8db38f6db5a2bd 4a53de8c7cfef15e1b4d2eb2d08e3992e87a5d45cb56ec1579cce90b650a86a3]
	I0831 23:17:15.998594 1378731 ssh_runner.go:195] Run: which crictl
	I0831 23:17:16.004283 1378731 ssh_runner.go:195] Run: which crictl
	I0831 23:17:16.009432 1378731 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0831 23:17:16.009593 1378731 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0831 23:17:16.054331 1378731 cri.go:89] found id: "e7bba7657fd955c1b7ffaed6f8954f4add6c68cbbebb5450b878b68fecc3dfd4"
	I0831 23:17:16.054397 1378731 cri.go:89] found id: "0ddaf9925b5b8a2498baf923aedda0b04394c82cc0c7c18555b7233ea40ba24c"
	I0831 23:17:16.054417 1378731 cri.go:89] found id: ""
	I0831 23:17:16.054432 1378731 logs.go:276] 2 containers: [e7bba7657fd955c1b7ffaed6f8954f4add6c68cbbebb5450b878b68fecc3dfd4 0ddaf9925b5b8a2498baf923aedda0b04394c82cc0c7c18555b7233ea40ba24c]
	I0831 23:17:16.054506 1378731 ssh_runner.go:195] Run: which crictl
	I0831 23:17:16.058325 1378731 ssh_runner.go:195] Run: which crictl
	I0831 23:17:16.062125 1378731 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0831 23:17:16.062198 1378731 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0831 23:17:16.110244 1378731 cri.go:89] found id: "3a128b2ebef5226632792617aecdd7d9fa214ff983a449971d7ccdfab3a99f21"
	I0831 23:17:16.110268 1378731 cri.go:89] found id: ""
	I0831 23:17:16.110277 1378731 logs.go:276] 1 containers: [3a128b2ebef5226632792617aecdd7d9fa214ff983a449971d7ccdfab3a99f21]
	I0831 23:17:16.110349 1378731 ssh_runner.go:195] Run: which crictl
	I0831 23:17:16.114110 1378731 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0831 23:17:16.114215 1378731 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0831 23:17:16.154992 1378731 cri.go:89] found id: "f7e4d956562c970c31e64420c5803c72175a04dd6fcef52066a3ece0a6233f9f"
	I0831 23:17:16.155016 1378731 cri.go:89] found id: "a7e7bee1e72395c1a5b201c2a16ae1e8c0725a75e13db42a8afd0bd7b61f1a6b"
	I0831 23:17:16.155021 1378731 cri.go:89] found id: ""
	I0831 23:17:16.155028 1378731 logs.go:276] 2 containers: [f7e4d956562c970c31e64420c5803c72175a04dd6fcef52066a3ece0a6233f9f a7e7bee1e72395c1a5b201c2a16ae1e8c0725a75e13db42a8afd0bd7b61f1a6b]
	I0831 23:17:16.155088 1378731 ssh_runner.go:195] Run: which crictl
	I0831 23:17:16.159471 1378731 ssh_runner.go:195] Run: which crictl
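The discovery pass above repeats one pattern per control-plane component: resolve the crictl binary, list matching container IDs, then (below) pull each container's recent log tail. Done by hand on the node, with the same commands this log records (the container ID placeholder is illustrative, to be filled from the first command's output):

	sudo crictl ps -a --quiet --name=kube-apiserver
	sudo crictl logs --tail 400 <container-id>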
	I0831 23:17:16.163212 1378731 logs.go:123] Gathering logs for describe nodes ...
	I0831 23:17:16.163239 1378731 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 23:17:16.311122 1378731 logs.go:123] Gathering logs for kube-controller-manager [4a53de8c7cfef15e1b4d2eb2d08e3992e87a5d45cb56ec1579cce90b650a86a3] ...
	I0831 23:17:16.311158 1378731 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4a53de8c7cfef15e1b4d2eb2d08e3992e87a5d45cb56ec1579cce90b650a86a3"
	I0831 23:17:16.395216 1378731 logs.go:123] Gathering logs for storage-provisioner [a7e7bee1e72395c1a5b201c2a16ae1e8c0725a75e13db42a8afd0bd7b61f1a6b] ...
	I0831 23:17:16.395252 1378731 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a7e7bee1e72395c1a5b201c2a16ae1e8c0725a75e13db42a8afd0bd7b61f1a6b"
	I0831 23:17:16.433807 1378731 logs.go:123] Gathering logs for container status ...
	I0831 23:17:16.433836 1378731 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 23:17:16.499825 1378731 logs.go:123] Gathering logs for kubelet ...
	I0831 23:17:16.499852 1378731 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0831 23:17:16.559083 1378731 logs.go:138] Found kubelet problem: Aug 31 23:11:59 old-k8s-version-777320 kubelet[661]: E0831 23:11:59.886301     661 pod_workers.go:191] Error syncing pod b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01 ("metrics-server-9975d5f86-dl7gj_kube-system(b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0831 23:17:16.559507 1378731 logs.go:138] Found kubelet problem: Aug 31 23:12:00 old-k8s-version-777320 kubelet[661]: E0831 23:12:00.799346     661 pod_workers.go:191] Error syncing pod b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01 ("metrics-server-9975d5f86-dl7gj_kube-system(b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0831 23:17:16.563102 1378731 logs.go:138] Found kubelet problem: Aug 31 23:12:16 old-k8s-version-777320 kubelet[661]: E0831 23:12:16.573045     661 pod_workers.go:191] Error syncing pod b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01 ("metrics-server-9975d5f86-dl7gj_kube-system(b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0831 23:17:16.565584 1378731 logs.go:138] Found kubelet problem: Aug 31 23:12:28 old-k8s-version-777320 kubelet[661]: E0831 23:12:28.934864     661 pod_workers.go:191] Error syncing pod 602e3f57-a665-4345-99f4-ac5f270847b4 ("dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"
	W0831 23:17:16.565923 1378731 logs.go:138] Found kubelet problem: Aug 31 23:12:29 old-k8s-version-777320 kubelet[661]: E0831 23:12:29.938937     661 pod_workers.go:191] Error syncing pod 602e3f57-a665-4345-99f4-ac5f270847b4 ("dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"
	W0831 23:17:16.566118 1378731 logs.go:138] Found kubelet problem: Aug 31 23:12:30 old-k8s-version-777320 kubelet[661]: E0831 23:12:30.563156     661 pod_workers.go:191] Error syncing pod b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01 ("metrics-server-9975d5f86-dl7gj_kube-system(b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0831 23:17:16.566566 1378731 logs.go:138] Found kubelet problem: Aug 31 23:12:30 old-k8s-version-777320 kubelet[661]: E0831 23:12:30.944101     661 pod_workers.go:191] Error syncing pod a63ab31c-0052-473f-8538-7ccd4026e42f ("storage-provisioner_kube-system(a63ab31c-0052-473f-8538-7ccd4026e42f)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(a63ab31c-0052-473f-8538-7ccd4026e42f)"
	W0831 23:17:16.566900 1378731 logs.go:138] Found kubelet problem: Aug 31 23:12:35 old-k8s-version-777320 kubelet[661]: E0831 23:12:35.100827     661 pod_workers.go:191] Error syncing pod 602e3f57-a665-4345-99f4-ac5f270847b4 ("dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"
	W0831 23:17:16.569739 1378731 logs.go:138] Found kubelet problem: Aug 31 23:12:41 old-k8s-version-777320 kubelet[661]: E0831 23:12:41.565180     661 pod_workers.go:191] Error syncing pod b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01 ("metrics-server-9975d5f86-dl7gj_kube-system(b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0831 23:17:16.570343 1378731 logs.go:138] Found kubelet problem: Aug 31 23:12:50 old-k8s-version-777320 kubelet[661]: E0831 23:12:50.005945     661 pod_workers.go:191] Error syncing pod 602e3f57-a665-4345-99f4-ac5f270847b4 ("dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"
	W0831 23:17:16.570665 1378731 logs.go:138] Found kubelet problem: Aug 31 23:12:53 old-k8s-version-777320 kubelet[661]: E0831 23:12:53.558398     661 pod_workers.go:191] Error syncing pod b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01 ("metrics-server-9975d5f86-dl7gj_kube-system(b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0831 23:17:16.570996 1378731 logs.go:138] Found kubelet problem: Aug 31 23:12:55 old-k8s-version-777320 kubelet[661]: E0831 23:12:55.108037     661 pod_workers.go:191] Error syncing pod 602e3f57-a665-4345-99f4-ac5f270847b4 ("dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"
	W0831 23:17:16.571188 1378731 logs.go:138] Found kubelet problem: Aug 31 23:13:06 old-k8s-version-777320 kubelet[661]: E0831 23:13:06.558380     661 pod_workers.go:191] Error syncing pod b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01 ("metrics-server-9975d5f86-dl7gj_kube-system(b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0831 23:17:16.571528 1378731 logs.go:138] Found kubelet problem: Aug 31 23:13:07 old-k8s-version-777320 kubelet[661]: E0831 23:13:07.557412     661 pod_workers.go:191] Error syncing pod 602e3f57-a665-4345-99f4-ac5f270847b4 ("dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"
	W0831 23:17:16.571853 1378731 logs.go:138] Found kubelet problem: Aug 31 23:13:19 old-k8s-version-777320 kubelet[661]: E0831 23:13:19.558459     661 pod_workers.go:191] Error syncing pod b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01 ("metrics-server-9975d5f86-dl7gj_kube-system(b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0831 23:17:16.572325 1378731 logs.go:138] Found kubelet problem: Aug 31 23:13:20 old-k8s-version-777320 kubelet[661]: E0831 23:13:20.098381     661 pod_workers.go:191] Error syncing pod 602e3f57-a665-4345-99f4-ac5f270847b4 ("dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"
	W0831 23:17:16.572674 1378731 logs.go:138] Found kubelet problem: Aug 31 23:13:25 old-k8s-version-777320 kubelet[661]: E0831 23:13:25.100789     661 pod_workers.go:191] Error syncing pod 602e3f57-a665-4345-99f4-ac5f270847b4 ("dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"
	W0831 23:17:16.575241 1378731 logs.go:138] Found kubelet problem: Aug 31 23:13:31 old-k8s-version-777320 kubelet[661]: E0831 23:13:31.566325     661 pod_workers.go:191] Error syncing pod b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01 ("metrics-server-9975d5f86-dl7gj_kube-system(b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0831 23:17:16.575584 1378731 logs.go:138] Found kubelet problem: Aug 31 23:13:39 old-k8s-version-777320 kubelet[661]: E0831 23:13:39.557527     661 pod_workers.go:191] Error syncing pod 602e3f57-a665-4345-99f4-ac5f270847b4 ("dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"
	W0831 23:17:16.575782 1378731 logs.go:138] Found kubelet problem: Aug 31 23:13:46 old-k8s-version-777320 kubelet[661]: E0831 23:13:46.576899     661 pod_workers.go:191] Error syncing pod b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01 ("metrics-server-9975d5f86-dl7gj_kube-system(b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0831 23:17:16.576125 1378731 logs.go:138] Found kubelet problem: Aug 31 23:13:51 old-k8s-version-777320 kubelet[661]: E0831 23:13:51.557404     661 pod_workers.go:191] Error syncing pod 602e3f57-a665-4345-99f4-ac5f270847b4 ("dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"
	W0831 23:17:16.576320 1378731 logs.go:138] Found kubelet problem: Aug 31 23:13:59 old-k8s-version-777320 kubelet[661]: E0831 23:13:59.557798     661 pod_workers.go:191] Error syncing pod b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01 ("metrics-server-9975d5f86-dl7gj_kube-system(b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0831 23:17:16.576945 1378731 logs.go:138] Found kubelet problem: Aug 31 23:14:06 old-k8s-version-777320 kubelet[661]: E0831 23:14:06.253495     661 pod_workers.go:191] Error syncing pod 602e3f57-a665-4345-99f4-ac5f270847b4 ("dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"
	W0831 23:17:16.577135 1378731 logs.go:138] Found kubelet problem: Aug 31 23:14:12 old-k8s-version-777320 kubelet[661]: E0831 23:14:12.562241     661 pod_workers.go:191] Error syncing pod b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01 ("metrics-server-9975d5f86-dl7gj_kube-system(b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0831 23:17:16.577478 1378731 logs.go:138] Found kubelet problem: Aug 31 23:14:15 old-k8s-version-777320 kubelet[661]: E0831 23:14:15.101214     661 pod_workers.go:191] Error syncing pod 602e3f57-a665-4345-99f4-ac5f270847b4 ("dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"
	W0831 23:17:16.577671 1378731 logs.go:138] Found kubelet problem: Aug 31 23:14:23 old-k8s-version-777320 kubelet[661]: E0831 23:14:23.557778     661 pod_workers.go:191] Error syncing pod b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01 ("metrics-server-9975d5f86-dl7gj_kube-system(b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0831 23:17:16.578016 1378731 logs.go:138] Found kubelet problem: Aug 31 23:14:29 old-k8s-version-777320 kubelet[661]: E0831 23:14:29.557518     661 pod_workers.go:191] Error syncing pod 602e3f57-a665-4345-99f4-ac5f270847b4 ("dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"
	W0831 23:17:16.578210 1378731 logs.go:138] Found kubelet problem: Aug 31 23:14:36 old-k8s-version-777320 kubelet[661]: E0831 23:14:36.558367     661 pod_workers.go:191] Error syncing pod b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01 ("metrics-server-9975d5f86-dl7gj_kube-system(b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0831 23:17:16.578552 1378731 logs.go:138] Found kubelet problem: Aug 31 23:14:44 old-k8s-version-777320 kubelet[661]: E0831 23:14:44.558217     661 pod_workers.go:191] Error syncing pod 602e3f57-a665-4345-99f4-ac5f270847b4 ("dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"
	W0831 23:17:16.578756 1378731 logs.go:138] Found kubelet problem: Aug 31 23:14:47 old-k8s-version-777320 kubelet[661]: E0831 23:14:47.557763     661 pod_workers.go:191] Error syncing pod b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01 ("metrics-server-9975d5f86-dl7gj_kube-system(b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0831 23:17:16.579101 1378731 logs.go:138] Found kubelet problem: Aug 31 23:14:58 old-k8s-version-777320 kubelet[661]: E0831 23:14:58.571753     661 pod_workers.go:191] Error syncing pod 602e3f57-a665-4345-99f4-ac5f270847b4 ("dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"
	W0831 23:17:16.581643 1378731 logs.go:138] Found kubelet problem: Aug 31 23:15:02 old-k8s-version-777320 kubelet[661]: E0831 23:15:02.567140     661 pod_workers.go:191] Error syncing pod b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01 ("metrics-server-9975d5f86-dl7gj_kube-system(b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0831 23:17:16.581981 1378731 logs.go:138] Found kubelet problem: Aug 31 23:15:13 old-k8s-version-777320 kubelet[661]: E0831 23:15:13.557443     661 pod_workers.go:191] Error syncing pod 602e3f57-a665-4345-99f4-ac5f270847b4 ("dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"
	W0831 23:17:16.582174 1378731 logs.go:138] Found kubelet problem: Aug 31 23:15:15 old-k8s-version-777320 kubelet[661]: E0831 23:15:15.557843     661 pod_workers.go:191] Error syncing pod b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01 ("metrics-server-9975d5f86-dl7gj_kube-system(b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0831 23:17:16.582497 1378731 logs.go:138] Found kubelet problem: Aug 31 23:15:27 old-k8s-version-777320 kubelet[661]: E0831 23:15:27.558332     661 pod_workers.go:191] Error syncing pod b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01 ("metrics-server-9975d5f86-dl7gj_kube-system(b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0831 23:17:16.582963 1378731 logs.go:138] Found kubelet problem: Aug 31 23:15:28 old-k8s-version-777320 kubelet[661]: E0831 23:15:28.485753     661 pod_workers.go:191] Error syncing pod 602e3f57-a665-4345-99f4-ac5f270847b4 ("dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"
	W0831 23:17:16.583303 1378731 logs.go:138] Found kubelet problem: Aug 31 23:15:35 old-k8s-version-777320 kubelet[661]: E0831 23:15:35.101354     661 pod_workers.go:191] Error syncing pod 602e3f57-a665-4345-99f4-ac5f270847b4 ("dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"
	W0831 23:17:16.583497 1378731 logs.go:138] Found kubelet problem: Aug 31 23:15:41 old-k8s-version-777320 kubelet[661]: E0831 23:15:41.558221     661 pod_workers.go:191] Error syncing pod b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01 ("metrics-server-9975d5f86-dl7gj_kube-system(b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0831 23:17:16.583827 1378731 logs.go:138] Found kubelet problem: Aug 31 23:15:46 old-k8s-version-777320 kubelet[661]: E0831 23:15:46.558727     661 pod_workers.go:191] Error syncing pod 602e3f57-a665-4345-99f4-ac5f270847b4 ("dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"
	W0831 23:17:16.584021 1378731 logs.go:138] Found kubelet problem: Aug 31 23:15:52 old-k8s-version-777320 kubelet[661]: E0831 23:15:52.560966     661 pod_workers.go:191] Error syncing pod b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01 ("metrics-server-9975d5f86-dl7gj_kube-system(b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0831 23:17:16.584359 1378731 logs.go:138] Found kubelet problem: Aug 31 23:16:00 old-k8s-version-777320 kubelet[661]: E0831 23:16:00.559529     661 pod_workers.go:191] Error syncing pod 602e3f57-a665-4345-99f4-ac5f270847b4 ("dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"
	W0831 23:17:16.584545 1378731 logs.go:138] Found kubelet problem: Aug 31 23:16:04 old-k8s-version-777320 kubelet[661]: E0831 23:16:04.557871     661 pod_workers.go:191] Error syncing pod b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01 ("metrics-server-9975d5f86-dl7gj_kube-system(b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0831 23:17:16.584888 1378731 logs.go:138] Found kubelet problem: Aug 31 23:16:12 old-k8s-version-777320 kubelet[661]: E0831 23:16:12.558116     661 pod_workers.go:191] Error syncing pod 602e3f57-a665-4345-99f4-ac5f270847b4 ("dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"
	W0831 23:17:16.585076 1378731 logs.go:138] Found kubelet problem: Aug 31 23:16:17 old-k8s-version-777320 kubelet[661]: E0831 23:16:17.558685     661 pod_workers.go:191] Error syncing pod b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01 ("metrics-server-9975d5f86-dl7gj_kube-system(b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0831 23:17:16.585409 1378731 logs.go:138] Found kubelet problem: Aug 31 23:16:26 old-k8s-version-777320 kubelet[661]: E0831 23:16:26.559804     661 pod_workers.go:191] Error syncing pod 602e3f57-a665-4345-99f4-ac5f270847b4 ("dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"
	W0831 23:17:16.585606 1378731 logs.go:138] Found kubelet problem: Aug 31 23:16:30 old-k8s-version-777320 kubelet[661]: E0831 23:16:30.558289     661 pod_workers.go:191] Error syncing pod b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01 ("metrics-server-9975d5f86-dl7gj_kube-system(b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0831 23:17:16.585936 1378731 logs.go:138] Found kubelet problem: Aug 31 23:16:37 old-k8s-version-777320 kubelet[661]: E0831 23:16:37.557945     661 pod_workers.go:191] Error syncing pod 602e3f57-a665-4345-99f4-ac5f270847b4 ("dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"
	W0831 23:17:16.586126 1378731 logs.go:138] Found kubelet problem: Aug 31 23:16:45 old-k8s-version-777320 kubelet[661]: E0831 23:16:45.557768     661 pod_workers.go:191] Error syncing pod b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01 ("metrics-server-9975d5f86-dl7gj_kube-system(b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0831 23:17:16.586455 1378731 logs.go:138] Found kubelet problem: Aug 31 23:16:50 old-k8s-version-777320 kubelet[661]: E0831 23:16:50.558846     661 pod_workers.go:191] Error syncing pod 602e3f57-a665-4345-99f4-ac5f270847b4 ("dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"
	W0831 23:17:16.586641 1378731 logs.go:138] Found kubelet problem: Aug 31 23:16:58 old-k8s-version-777320 kubelet[661]: E0831 23:16:58.561819     661 pod_workers.go:191] Error syncing pod b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01 ("metrics-server-9975d5f86-dl7gj_kube-system(b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0831 23:17:16.586974 1378731 logs.go:138] Found kubelet problem: Aug 31 23:17:02 old-k8s-version-777320 kubelet[661]: E0831 23:17:02.558100     661 pod_workers.go:191] Error syncing pod 602e3f57-a665-4345-99f4-ac5f270847b4 ("dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"
	W0831 23:17:16.587173 1378731 logs.go:138] Found kubelet problem: Aug 31 23:17:09 old-k8s-version-777320 kubelet[661]: E0831 23:17:09.557874     661 pod_workers.go:191] Error syncing pod b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01 ("metrics-server-9975d5f86-dl7gj_kube-system(b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
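	Note on the repeated warnings above: they reduce to two failure loops. metrics-server alternates between ErrImagePull and ImagePullBackOff because the registry host fake.domain never resolves (every attempt ends in "dial tcp: lookup fake.domain ... no such host"), while dashboard-metrics-scraper sits in CrashLoopBackOff with kubelet's back-off doubling from 10s through 20s, 40s, and 1m20s to the 2m40s seen at the end of this window. A minimal sketch for confirming the DNS-level cause from a shell on the node, assuming crictl is available there; the image reference is copied from the log:

	    nslookup fake.domain                                          # expect NXDOMAIN, matching "no such host"
	    sudo crictl pull fake.domain/registry.k8s.io/echoserver:1.4   # reproduces the ErrImagePull by hand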
	I0831 23:17:16.587183 1378731 logs.go:123] Gathering logs for etcd [36189b23eed4dd70cc8796dc64533ece0f97c8515188d9c4bb6817079cf848fa] ...
	I0831 23:17:16.587198 1378731 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 36189b23eed4dd70cc8796dc64533ece0f97c8515188d9c4bb6817079cf848fa"
	I0831 23:17:16.638160 1378731 logs.go:123] Gathering logs for kube-scheduler [f3564f974c7c186a4cf3110fbdf83d7607a54cbd7d58484326748df57213e666] ...
	I0831 23:17:16.638196 1378731 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f3564f974c7c186a4cf3110fbdf83d7607a54cbd7d58484326748df57213e666"
	I0831 23:17:16.678811 1378731 logs.go:123] Gathering logs for kube-scheduler [cb403497717362a835119a12fcd9a98f048e1513652343987c5706732ded954d] ...
	I0831 23:17:16.678841 1378731 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cb403497717362a835119a12fcd9a98f048e1513652343987c5706732ded954d"
	I0831 23:17:16.723142 1378731 logs.go:123] Gathering logs for kube-proxy [375dfe16bc1a3f88bb1adae552829f1de1f6ccd7a3b2bf3eb9e823c85daa3329] ...
	I0831 23:17:16.723175 1378731 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 375dfe16bc1a3f88bb1adae552829f1de1f6ccd7a3b2bf3eb9e823c85daa3329"
	I0831 23:17:16.781476 1378731 logs.go:123] Gathering logs for kindnet [0ddaf9925b5b8a2498baf923aedda0b04394c82cc0c7c18555b7233ea40ba24c] ...
	I0831 23:17:16.781513 1378731 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0ddaf9925b5b8a2498baf923aedda0b04394c82cc0c7c18555b7233ea40ba24c"
	I0831 23:17:16.827016 1378731 logs.go:123] Gathering logs for containerd ...
	I0831 23:17:16.827045 1378731 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0831 23:17:16.892742 1378731 logs.go:123] Gathering logs for kube-apiserver [2b314ddb4c1637e9a96260a3016921d4c647744eac2a23a86ba97ac80539955e] ...
	I0831 23:17:16.892821 1378731 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b314ddb4c1637e9a96260a3016921d4c647744eac2a23a86ba97ac80539955e"
	I0831 23:17:16.973231 1378731 logs.go:123] Gathering logs for kube-apiserver [24d2daafe86a0dcb6af4171206676787738fec4b49e748c8e217d63f6af8bb12] ...
	I0831 23:17:16.973266 1378731 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 24d2daafe86a0dcb6af4171206676787738fec4b49e748c8e217d63f6af8bb12"
	I0831 23:17:17.073665 1378731 logs.go:123] Gathering logs for etcd [adc270d3e8398b7a86ff787dfd6fa155a7deeb47c29d94cf0371e7f3af2cf66a] ...
	I0831 23:17:17.073702 1378731 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 adc270d3e8398b7a86ff787dfd6fa155a7deeb47c29d94cf0371e7f3af2cf66a"
	I0831 23:17:17.123022 1378731 logs.go:123] Gathering logs for coredns [83c393cb9b979e9591d3e9004c20ad7a85c3cf5a2fb01002fa02cdd21598c0ee] ...
	I0831 23:17:17.123055 1378731 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 83c393cb9b979e9591d3e9004c20ad7a85c3cf5a2fb01002fa02cdd21598c0ee"
	I0831 23:17:17.161757 1378731 logs.go:123] Gathering logs for coredns [26aa1d510c36cb526536c021669ae3f5436cdad4f0a6f9c30b4b361249af6077] ...
	I0831 23:17:17.161789 1378731 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 26aa1d510c36cb526536c021669ae3f5436cdad4f0a6f9c30b4b361249af6077"
	I0831 23:17:17.207656 1378731 logs.go:123] Gathering logs for kube-proxy [a630dbfa8aa905e5ea3326c49649056da78bba5cfd2beda22ff0f2f93515a197] ...
	I0831 23:17:17.207688 1378731 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a630dbfa8aa905e5ea3326c49649056da78bba5cfd2beda22ff0f2f93515a197"
	I0831 23:17:17.253344 1378731 logs.go:123] Gathering logs for kindnet [e7bba7657fd955c1b7ffaed6f8954f4add6c68cbbebb5450b878b68fecc3dfd4] ...
	I0831 23:17:17.253373 1378731 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7bba7657fd955c1b7ffaed6f8954f4add6c68cbbebb5450b878b68fecc3dfd4"
	I0831 23:17:17.305840 1378731 logs.go:123] Gathering logs for kubernetes-dashboard [3a128b2ebef5226632792617aecdd7d9fa214ff983a449971d7ccdfab3a99f21] ...
	I0831 23:17:17.305872 1378731 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3a128b2ebef5226632792617aecdd7d9fa214ff983a449971d7ccdfab3a99f21"
	I0831 23:17:17.350496 1378731 logs.go:123] Gathering logs for dmesg ...
	I0831 23:17:17.350531 1378731 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 23:17:17.367849 1378731 logs.go:123] Gathering logs for kube-controller-manager [2ba9272459fbd3d5920c42ffd67f3fc7be523ddb8abd1b3e2f8db38f6db5a2bd] ...
	I0831 23:17:17.367879 1378731 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ba9272459fbd3d5920c42ffd67f3fc7be523ddb8abd1b3e2f8db38f6db5a2bd"
	I0831 23:17:17.430837 1378731 logs.go:123] Gathering logs for storage-provisioner [f7e4d956562c970c31e64420c5803c72175a04dd6fcef52066a3ece0a6233f9f] ...
	I0831 23:17:17.430872 1378731 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f7e4d956562c970c31e64420c5803c72175a04dd6fcef52066a3ece0a6233f9f"
	I0831 23:17:17.476393 1378731 out.go:358] Setting ErrFile to fd 2...
	I0831 23:17:17.476424 1378731 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0831 23:17:17.476507 1378731 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0831 23:17:17.476522 1378731 out.go:270]   Aug 31 23:16:45 old-k8s-version-777320 kubelet[661]: E0831 23:16:45.557768     661 pod_workers.go:191] Error syncing pod b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01 ("metrics-server-9975d5f86-dl7gj_kube-system(b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Aug 31 23:16:45 old-k8s-version-777320 kubelet[661]: E0831 23:16:45.557768     661 pod_workers.go:191] Error syncing pod b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01 ("metrics-server-9975d5f86-dl7gj_kube-system(b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0831 23:17:17.476537 1378731 out.go:270]   Aug 31 23:16:50 old-k8s-version-777320 kubelet[661]: E0831 23:16:50.558846     661 pod_workers.go:191] Error syncing pod 602e3f57-a665-4345-99f4-ac5f270847b4 ("dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"
	  Aug 31 23:16:50 old-k8s-version-777320 kubelet[661]: E0831 23:16:50.558846     661 pod_workers.go:191] Error syncing pod 602e3f57-a665-4345-99f4-ac5f270847b4 ("dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"
	W0831 23:17:17.476565 1378731 out.go:270]   Aug 31 23:16:58 old-k8s-version-777320 kubelet[661]: E0831 23:16:58.561819     661 pod_workers.go:191] Error syncing pod b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01 ("metrics-server-9975d5f86-dl7gj_kube-system(b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Aug 31 23:16:58 old-k8s-version-777320 kubelet[661]: E0831 23:16:58.561819     661 pod_workers.go:191] Error syncing pod b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01 ("metrics-server-9975d5f86-dl7gj_kube-system(b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0831 23:17:17.476585 1378731 out.go:270]   Aug 31 23:17:02 old-k8s-version-777320 kubelet[661]: E0831 23:17:02.558100     661 pod_workers.go:191] Error syncing pod 602e3f57-a665-4345-99f4-ac5f270847b4 ("dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"
	  Aug 31 23:17:02 old-k8s-version-777320 kubelet[661]: E0831 23:17:02.558100     661 pod_workers.go:191] Error syncing pod 602e3f57-a665-4345-99f4-ac5f270847b4 ("dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"
	W0831 23:17:17.476599 1378731 out.go:270]   Aug 31 23:17:09 old-k8s-version-777320 kubelet[661]: E0831 23:17:09.557874     661 pod_workers.go:191] Error syncing pod b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01 ("metrics-server-9975d5f86-dl7gj_kube-system(b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Aug 31 23:17:09 old-k8s-version-777320 kubelet[661]: E0831 23:17:09.557874     661 pod_workers.go:191] Error syncing pod b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01 ("metrics-server-9975d5f86-dl7gj_kube-system(b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0831 23:17:17.476670 1378731 out.go:358] Setting ErrFile to fd 2...
	I0831 23:17:17.476689 1378731 out.go:392] TERM=,COLORTERM=, which probably does not support color
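	The pair of out.go lines above closes one iteration of the log-analysis loop: each pass re-discovers container IDs with crictl, re-tails each component's logs, and scans journalctl output for kubelet problems. A hedged sketch of the equivalent manual steps, using only commands that appear verbatim in the Run: lines above; <container-id> is a placeholder for an ID returned by the first command:

	    sudo crictl ps -a --quiet --name=kube-apiserver                            # repeated once per component
	    sudo /usr/bin/crictl logs --tail 400 <container-id>                        # tail that container's log
	    sudo journalctl -u kubelet -n 400                                          # input for the "Found kubelet problem" scan
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400    # kernel warnings and errors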
	I0831 23:17:27.478286 1378731 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0831 23:17:27.490816 1378731 api_server.go:72] duration metric: took 5m48.826929616s to wait for apiserver process to appear ...
	I0831 23:17:27.490838 1378731 api_server.go:88] waiting for apiserver healthz status ...
	I0831 23:17:27.490882 1378731 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0831 23:17:27.490938 1378731 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0831 23:17:27.569029 1378731 cri.go:89] found id: "2b314ddb4c1637e9a96260a3016921d4c647744eac2a23a86ba97ac80539955e"
	I0831 23:17:27.569048 1378731 cri.go:89] found id: "24d2daafe86a0dcb6af4171206676787738fec4b49e748c8e217d63f6af8bb12"
	I0831 23:17:27.569053 1378731 cri.go:89] found id: ""
	I0831 23:17:27.569060 1378731 logs.go:276] 2 containers: [2b314ddb4c1637e9a96260a3016921d4c647744eac2a23a86ba97ac80539955e 24d2daafe86a0dcb6af4171206676787738fec4b49e748c8e217d63f6af8bb12]
	I0831 23:17:27.569113 1378731 ssh_runner.go:195] Run: which crictl
	I0831 23:17:27.573327 1378731 ssh_runner.go:195] Run: which crictl
	I0831 23:17:27.577199 1378731 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0831 23:17:27.577262 1378731 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0831 23:17:27.626986 1378731 cri.go:89] found id: "adc270d3e8398b7a86ff787dfd6fa155a7deeb47c29d94cf0371e7f3af2cf66a"
	I0831 23:17:27.627004 1378731 cri.go:89] found id: "36189b23eed4dd70cc8796dc64533ece0f97c8515188d9c4bb6817079cf848fa"
	I0831 23:17:27.627009 1378731 cri.go:89] found id: ""
	I0831 23:17:27.627017 1378731 logs.go:276] 2 containers: [adc270d3e8398b7a86ff787dfd6fa155a7deeb47c29d94cf0371e7f3af2cf66a 36189b23eed4dd70cc8796dc64533ece0f97c8515188d9c4bb6817079cf848fa]
	I0831 23:17:27.627071 1378731 ssh_runner.go:195] Run: which crictl
	I0831 23:17:27.631361 1378731 ssh_runner.go:195] Run: which crictl
	I0831 23:17:27.635643 1378731 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0831 23:17:27.635706 1378731 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0831 23:17:27.683136 1378731 cri.go:89] found id: "83c393cb9b979e9591d3e9004c20ad7a85c3cf5a2fb01002fa02cdd21598c0ee"
	I0831 23:17:27.683156 1378731 cri.go:89] found id: "26aa1d510c36cb526536c021669ae3f5436cdad4f0a6f9c30b4b361249af6077"
	I0831 23:17:27.683160 1378731 cri.go:89] found id: ""
	I0831 23:17:27.683167 1378731 logs.go:276] 2 containers: [83c393cb9b979e9591d3e9004c20ad7a85c3cf5a2fb01002fa02cdd21598c0ee 26aa1d510c36cb526536c021669ae3f5436cdad4f0a6f9c30b4b361249af6077]
	I0831 23:17:27.683221 1378731 ssh_runner.go:195] Run: which crictl
	I0831 23:17:27.687219 1378731 ssh_runner.go:195] Run: which crictl
	I0831 23:17:27.692442 1378731 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0831 23:17:27.692509 1378731 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0831 23:17:27.789373 1378731 cri.go:89] found id: "f3564f974c7c186a4cf3110fbdf83d7607a54cbd7d58484326748df57213e666"
	I0831 23:17:27.789445 1378731 cri.go:89] found id: "cb403497717362a835119a12fcd9a98f048e1513652343987c5706732ded954d"
	I0831 23:17:27.789465 1378731 cri.go:89] found id: ""
	I0831 23:17:27.789493 1378731 logs.go:276] 2 containers: [f3564f974c7c186a4cf3110fbdf83d7607a54cbd7d58484326748df57213e666 cb403497717362a835119a12fcd9a98f048e1513652343987c5706732ded954d]
	I0831 23:17:27.789562 1378731 ssh_runner.go:195] Run: which crictl
	I0831 23:17:27.799289 1378731 ssh_runner.go:195] Run: which crictl
	I0831 23:17:27.803458 1378731 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0831 23:17:27.803580 1378731 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0831 23:17:27.870602 1378731 cri.go:89] found id: "a630dbfa8aa905e5ea3326c49649056da78bba5cfd2beda22ff0f2f93515a197"
	I0831 23:17:27.870624 1378731 cri.go:89] found id: "375dfe16bc1a3f88bb1adae552829f1de1f6ccd7a3b2bf3eb9e823c85daa3329"
	I0831 23:17:27.870629 1378731 cri.go:89] found id: ""
	I0831 23:17:27.870636 1378731 logs.go:276] 2 containers: [a630dbfa8aa905e5ea3326c49649056da78bba5cfd2beda22ff0f2f93515a197 375dfe16bc1a3f88bb1adae552829f1de1f6ccd7a3b2bf3eb9e823c85daa3329]
	I0831 23:17:27.870694 1378731 ssh_runner.go:195] Run: which crictl
	I0831 23:17:27.874793 1378731 ssh_runner.go:195] Run: which crictl
	I0831 23:17:27.878737 1378731 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0831 23:17:27.878804 1378731 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0831 23:17:27.960687 1378731 cri.go:89] found id: "2ba9272459fbd3d5920c42ffd67f3fc7be523ddb8abd1b3e2f8db38f6db5a2bd"
	I0831 23:17:27.960708 1378731 cri.go:89] found id: "4a53de8c7cfef15e1b4d2eb2d08e3992e87a5d45cb56ec1579cce90b650a86a3"
	I0831 23:17:27.960713 1378731 cri.go:89] found id: ""
	I0831 23:17:27.960720 1378731 logs.go:276] 2 containers: [2ba9272459fbd3d5920c42ffd67f3fc7be523ddb8abd1b3e2f8db38f6db5a2bd 4a53de8c7cfef15e1b4d2eb2d08e3992e87a5d45cb56ec1579cce90b650a86a3]
	I0831 23:17:27.960779 1378731 ssh_runner.go:195] Run: which crictl
	I0831 23:17:27.964573 1378731 ssh_runner.go:195] Run: which crictl
	I0831 23:17:27.969609 1378731 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0831 23:17:27.969674 1378731 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0831 23:17:28.029658 1378731 cri.go:89] found id: "e7bba7657fd955c1b7ffaed6f8954f4add6c68cbbebb5450b878b68fecc3dfd4"
	I0831 23:17:28.029682 1378731 cri.go:89] found id: "0ddaf9925b5b8a2498baf923aedda0b04394c82cc0c7c18555b7233ea40ba24c"
	I0831 23:17:28.029689 1378731 cri.go:89] found id: ""
	I0831 23:17:28.029696 1378731 logs.go:276] 2 containers: [e7bba7657fd955c1b7ffaed6f8954f4add6c68cbbebb5450b878b68fecc3dfd4 0ddaf9925b5b8a2498baf923aedda0b04394c82cc0c7c18555b7233ea40ba24c]
	I0831 23:17:28.029760 1378731 ssh_runner.go:195] Run: which crictl
	I0831 23:17:28.034482 1378731 ssh_runner.go:195] Run: which crictl
	I0831 23:17:28.039042 1378731 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0831 23:17:28.039128 1378731 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0831 23:17:28.089883 1378731 cri.go:89] found id: "3a128b2ebef5226632792617aecdd7d9fa214ff983a449971d7ccdfab3a99f21"
	I0831 23:17:28.089909 1378731 cri.go:89] found id: ""
	I0831 23:17:28.089923 1378731 logs.go:276] 1 containers: [3a128b2ebef5226632792617aecdd7d9fa214ff983a449971d7ccdfab3a99f21]
	I0831 23:17:28.090037 1378731 ssh_runner.go:195] Run: which crictl
	I0831 23:17:28.094024 1378731 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0831 23:17:28.094139 1378731 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0831 23:17:28.148204 1378731 cri.go:89] found id: "f7e4d956562c970c31e64420c5803c72175a04dd6fcef52066a3ece0a6233f9f"
	I0831 23:17:28.148263 1378731 cri.go:89] found id: "a7e7bee1e72395c1a5b201c2a16ae1e8c0725a75e13db42a8afd0bd7b61f1a6b"
	I0831 23:17:28.148292 1378731 cri.go:89] found id: ""
	I0831 23:17:28.148313 1378731 logs.go:276] 2 containers: [f7e4d956562c970c31e64420c5803c72175a04dd6fcef52066a3ece0a6233f9f a7e7bee1e72395c1a5b201c2a16ae1e8c0725a75e13db42a8afd0bd7b61f1a6b]
	I0831 23:17:28.148397 1378731 ssh_runner.go:195] Run: which crictl
	I0831 23:17:28.153035 1378731 ssh_runner.go:195] Run: which crictl
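	Before each gathering pass the harness first confirms the apiserver process exists, then enumerates containers component by component, following each listing with a which crictl call to resolve the binary used for the log tails. The same discovery can be reproduced by hand; commands are copied from the Run: lines above:

	    sudo pgrep -xnf kube-apiserver.*minikube.*    # process check behind the healthz wait
	    sudo crictl ps -a --quiet --name=etcd         # returns the two etcd IDs listed above
	    which crictl                                  # path later invoked as /usr/bin/crictl in the log tails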
	I0831 23:17:28.157041 1378731 logs.go:123] Gathering logs for etcd [36189b23eed4dd70cc8796dc64533ece0f97c8515188d9c4bb6817079cf848fa] ...
	I0831 23:17:28.157118 1378731 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 36189b23eed4dd70cc8796dc64533ece0f97c8515188d9c4bb6817079cf848fa"
	I0831 23:17:28.219799 1378731 logs.go:123] Gathering logs for containerd ...
	I0831 23:17:28.219874 1378731 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0831 23:17:28.289711 1378731 logs.go:123] Gathering logs for etcd [adc270d3e8398b7a86ff787dfd6fa155a7deeb47c29d94cf0371e7f3af2cf66a] ...
	I0831 23:17:28.289790 1378731 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 adc270d3e8398b7a86ff787dfd6fa155a7deeb47c29d94cf0371e7f3af2cf66a"
	I0831 23:17:28.344442 1378731 logs.go:123] Gathering logs for kube-controller-manager [4a53de8c7cfef15e1b4d2eb2d08e3992e87a5d45cb56ec1579cce90b650a86a3] ...
	I0831 23:17:28.344473 1378731 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4a53de8c7cfef15e1b4d2eb2d08e3992e87a5d45cb56ec1579cce90b650a86a3"
	I0831 23:17:28.417370 1378731 logs.go:123] Gathering logs for kubernetes-dashboard [3a128b2ebef5226632792617aecdd7d9fa214ff983a449971d7ccdfab3a99f21] ...
	I0831 23:17:28.417462 1378731 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3a128b2ebef5226632792617aecdd7d9fa214ff983a449971d7ccdfab3a99f21"
	I0831 23:17:28.475114 1378731 logs.go:123] Gathering logs for container status ...
	I0831 23:17:28.475141 1378731 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 23:17:28.555107 1378731 logs.go:123] Gathering logs for kube-apiserver [2b314ddb4c1637e9a96260a3016921d4c647744eac2a23a86ba97ac80539955e] ...
	I0831 23:17:28.555136 1378731 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b314ddb4c1637e9a96260a3016921d4c647744eac2a23a86ba97ac80539955e"
	I0831 23:17:28.652948 1378731 logs.go:123] Gathering logs for coredns [26aa1d510c36cb526536c021669ae3f5436cdad4f0a6f9c30b4b361249af6077] ...
	I0831 23:17:28.653044 1378731 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 26aa1d510c36cb526536c021669ae3f5436cdad4f0a6f9c30b4b361249af6077"
	I0831 23:17:28.718275 1378731 logs.go:123] Gathering logs for kube-scheduler [cb403497717362a835119a12fcd9a98f048e1513652343987c5706732ded954d] ...
	I0831 23:17:28.718299 1378731 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cb403497717362a835119a12fcd9a98f048e1513652343987c5706732ded954d"
	I0831 23:17:28.776545 1378731 logs.go:123] Gathering logs for kube-proxy [a630dbfa8aa905e5ea3326c49649056da78bba5cfd2beda22ff0f2f93515a197] ...
	I0831 23:17:28.776572 1378731 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a630dbfa8aa905e5ea3326c49649056da78bba5cfd2beda22ff0f2f93515a197"
	I0831 23:17:28.830806 1378731 logs.go:123] Gathering logs for kube-proxy [375dfe16bc1a3f88bb1adae552829f1de1f6ccd7a3b2bf3eb9e823c85daa3329] ...
	I0831 23:17:28.830831 1378731 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 375dfe16bc1a3f88bb1adae552829f1de1f6ccd7a3b2bf3eb9e823c85daa3329"
	I0831 23:17:28.953408 1378731 logs.go:123] Gathering logs for kube-controller-manager [2ba9272459fbd3d5920c42ffd67f3fc7be523ddb8abd1b3e2f8db38f6db5a2bd] ...
	I0831 23:17:28.953438 1378731 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ba9272459fbd3d5920c42ffd67f3fc7be523ddb8abd1b3e2f8db38f6db5a2bd"
	I0831 23:17:29.052029 1378731 logs.go:123] Gathering logs for kindnet [e7bba7657fd955c1b7ffaed6f8954f4add6c68cbbebb5450b878b68fecc3dfd4] ...
	I0831 23:17:29.052112 1378731 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7bba7657fd955c1b7ffaed6f8954f4add6c68cbbebb5450b878b68fecc3dfd4"
	I0831 23:17:29.132067 1378731 logs.go:123] Gathering logs for kubelet ...
	I0831 23:17:29.132142 1378731 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0831 23:17:29.205613 1378731 logs.go:138] Found kubelet problem: Aug 31 23:11:59 old-k8s-version-777320 kubelet[661]: E0831 23:11:59.886301     661 pod_workers.go:191] Error syncing pod b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01 ("metrics-server-9975d5f86-dl7gj_kube-system(b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0831 23:17:29.206125 1378731 logs.go:138] Found kubelet problem: Aug 31 23:12:00 old-k8s-version-777320 kubelet[661]: E0831 23:12:00.799346     661 pod_workers.go:191] Error syncing pod b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01 ("metrics-server-9975d5f86-dl7gj_kube-system(b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0831 23:17:29.209084 1378731 logs.go:138] Found kubelet problem: Aug 31 23:12:16 old-k8s-version-777320 kubelet[661]: E0831 23:12:16.573045     661 pod_workers.go:191] Error syncing pod b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01 ("metrics-server-9975d5f86-dl7gj_kube-system(b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0831 23:17:29.211702 1378731 logs.go:138] Found kubelet problem: Aug 31 23:12:28 old-k8s-version-777320 kubelet[661]: E0831 23:12:28.934864     661 pod_workers.go:191] Error syncing pod 602e3f57-a665-4345-99f4-ac5f270847b4 ("dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"
	W0831 23:17:29.212063 1378731 logs.go:138] Found kubelet problem: Aug 31 23:12:29 old-k8s-version-777320 kubelet[661]: E0831 23:12:29.938937     661 pod_workers.go:191] Error syncing pod 602e3f57-a665-4345-99f4-ac5f270847b4 ("dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"
	W0831 23:17:29.212274 1378731 logs.go:138] Found kubelet problem: Aug 31 23:12:30 old-k8s-version-777320 kubelet[661]: E0831 23:12:30.563156     661 pod_workers.go:191] Error syncing pod b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01 ("metrics-server-9975d5f86-dl7gj_kube-system(b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0831 23:17:29.212793 1378731 logs.go:138] Found kubelet problem: Aug 31 23:12:30 old-k8s-version-777320 kubelet[661]: E0831 23:12:30.944101     661 pod_workers.go:191] Error syncing pod a63ab31c-0052-473f-8538-7ccd4026e42f ("storage-provisioner_kube-system(a63ab31c-0052-473f-8538-7ccd4026e42f)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(a63ab31c-0052-473f-8538-7ccd4026e42f)"
	W0831 23:17:29.213242 1378731 logs.go:138] Found kubelet problem: Aug 31 23:12:35 old-k8s-version-777320 kubelet[661]: E0831 23:12:35.100827     661 pod_workers.go:191] Error syncing pod 602e3f57-a665-4345-99f4-ac5f270847b4 ("dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"
	W0831 23:17:29.216103 1378731 logs.go:138] Found kubelet problem: Aug 31 23:12:41 old-k8s-version-777320 kubelet[661]: E0831 23:12:41.565180     661 pod_workers.go:191] Error syncing pod b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01 ("metrics-server-9975d5f86-dl7gj_kube-system(b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0831 23:17:29.216741 1378731 logs.go:138] Found kubelet problem: Aug 31 23:12:50 old-k8s-version-777320 kubelet[661]: E0831 23:12:50.005945     661 pod_workers.go:191] Error syncing pod 602e3f57-a665-4345-99f4-ac5f270847b4 ("dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"
	W0831 23:17:29.217085 1378731 logs.go:138] Found kubelet problem: Aug 31 23:12:53 old-k8s-version-777320 kubelet[661]: E0831 23:12:53.558398     661 pod_workers.go:191] Error syncing pod b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01 ("metrics-server-9975d5f86-dl7gj_kube-system(b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0831 23:17:29.217442 1378731 logs.go:138] Found kubelet problem: Aug 31 23:12:55 old-k8s-version-777320 kubelet[661]: E0831 23:12:55.108037     661 pod_workers.go:191] Error syncing pod 602e3f57-a665-4345-99f4-ac5f270847b4 ("dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"
	W0831 23:17:29.217651 1378731 logs.go:138] Found kubelet problem: Aug 31 23:13:06 old-k8s-version-777320 kubelet[661]: E0831 23:13:06.558380     661 pod_workers.go:191] Error syncing pod b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01 ("metrics-server-9975d5f86-dl7gj_kube-system(b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0831 23:17:29.218006 1378731 logs.go:138] Found kubelet problem: Aug 31 23:13:07 old-k8s-version-777320 kubelet[661]: E0831 23:13:07.557412     661 pod_workers.go:191] Error syncing pod 602e3f57-a665-4345-99f4-ac5f270847b4 ("dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"
	W0831 23:17:29.218346 1378731 logs.go:138] Found kubelet problem: Aug 31 23:13:19 old-k8s-version-777320 kubelet[661]: E0831 23:13:19.558459     661 pod_workers.go:191] Error syncing pod b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01 ("metrics-server-9975d5f86-dl7gj_kube-system(b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0831 23:17:29.218826 1378731 logs.go:138] Found kubelet problem: Aug 31 23:13:20 old-k8s-version-777320 kubelet[661]: E0831 23:13:20.098381     661 pod_workers.go:191] Error syncing pod 602e3f57-a665-4345-99f4-ac5f270847b4 ("dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"
	W0831 23:17:29.219254 1378731 logs.go:138] Found kubelet problem: Aug 31 23:13:25 old-k8s-version-777320 kubelet[661]: E0831 23:13:25.100789     661 pod_workers.go:191] Error syncing pod 602e3f57-a665-4345-99f4-ac5f270847b4 ("dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"
	W0831 23:17:29.221881 1378731 logs.go:138] Found kubelet problem: Aug 31 23:13:31 old-k8s-version-777320 kubelet[661]: E0831 23:13:31.566325     661 pod_workers.go:191] Error syncing pod b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01 ("metrics-server-9975d5f86-dl7gj_kube-system(b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0831 23:17:29.222317 1378731 logs.go:138] Found kubelet problem: Aug 31 23:13:39 old-k8s-version-777320 kubelet[661]: E0831 23:13:39.557527     661 pod_workers.go:191] Error syncing pod 602e3f57-a665-4345-99f4-ac5f270847b4 ("dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"
	W0831 23:17:29.222567 1378731 logs.go:138] Found kubelet problem: Aug 31 23:13:46 old-k8s-version-777320 kubelet[661]: E0831 23:13:46.576899     661 pod_workers.go:191] Error syncing pod b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01 ("metrics-server-9975d5f86-dl7gj_kube-system(b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0831 23:17:29.222999 1378731 logs.go:138] Found kubelet problem: Aug 31 23:13:51 old-k8s-version-777320 kubelet[661]: E0831 23:13:51.557404     661 pod_workers.go:191] Error syncing pod 602e3f57-a665-4345-99f4-ac5f270847b4 ("dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"
	W0831 23:17:29.223228 1378731 logs.go:138] Found kubelet problem: Aug 31 23:13:59 old-k8s-version-777320 kubelet[661]: E0831 23:13:59.557798     661 pod_workers.go:191] Error syncing pod b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01 ("metrics-server-9975d5f86-dl7gj_kube-system(b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0831 23:17:29.223904 1378731 logs.go:138] Found kubelet problem: Aug 31 23:14:06 old-k8s-version-777320 kubelet[661]: E0831 23:14:06.253495     661 pod_workers.go:191] Error syncing pod 602e3f57-a665-4345-99f4-ac5f270847b4 ("dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"
	W0831 23:17:29.224132 1378731 logs.go:138] Found kubelet problem: Aug 31 23:14:12 old-k8s-version-777320 kubelet[661]: E0831 23:14:12.562241     661 pod_workers.go:191] Error syncing pod b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01 ("metrics-server-9975d5f86-dl7gj_kube-system(b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0831 23:17:29.224600 1378731 logs.go:138] Found kubelet problem: Aug 31 23:14:15 old-k8s-version-777320 kubelet[661]: E0831 23:14:15.101214     661 pod_workers.go:191] Error syncing pod 602e3f57-a665-4345-99f4-ac5f270847b4 ("dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"
	W0831 23:17:29.224848 1378731 logs.go:138] Found kubelet problem: Aug 31 23:14:23 old-k8s-version-777320 kubelet[661]: E0831 23:14:23.557778     661 pod_workers.go:191] Error syncing pod b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01 ("metrics-server-9975d5f86-dl7gj_kube-system(b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0831 23:17:29.225290 1378731 logs.go:138] Found kubelet problem: Aug 31 23:14:29 old-k8s-version-777320 kubelet[661]: E0831 23:14:29.557518     661 pod_workers.go:191] Error syncing pod 602e3f57-a665-4345-99f4-ac5f270847b4 ("dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"
	W0831 23:17:29.225554 1378731 logs.go:138] Found kubelet problem: Aug 31 23:14:36 old-k8s-version-777320 kubelet[661]: E0831 23:14:36.558367     661 pod_workers.go:191] Error syncing pod b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01 ("metrics-server-9975d5f86-dl7gj_kube-system(b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0831 23:17:29.226034 1378731 logs.go:138] Found kubelet problem: Aug 31 23:14:44 old-k8s-version-777320 kubelet[661]: E0831 23:14:44.558217     661 pod_workers.go:191] Error syncing pod 602e3f57-a665-4345-99f4-ac5f270847b4 ("dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"
	W0831 23:17:29.226519 1378731 logs.go:138] Found kubelet problem: Aug 31 23:14:47 old-k8s-version-777320 kubelet[661]: E0831 23:14:47.557763     661 pod_workers.go:191] Error syncing pod b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01 ("metrics-server-9975d5f86-dl7gj_kube-system(b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0831 23:17:29.227154 1378731 logs.go:138] Found kubelet problem: Aug 31 23:14:58 old-k8s-version-777320 kubelet[661]: E0831 23:14:58.571753     661 pod_workers.go:191] Error syncing pod 602e3f57-a665-4345-99f4-ac5f270847b4 ("dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"
	W0831 23:17:29.230275 1378731 logs.go:138] Found kubelet problem: Aug 31 23:15:02 old-k8s-version-777320 kubelet[661]: E0831 23:15:02.567140     661 pod_workers.go:191] Error syncing pod b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01 ("metrics-server-9975d5f86-dl7gj_kube-system(b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0831 23:17:29.230707 1378731 logs.go:138] Found kubelet problem: Aug 31 23:15:13 old-k8s-version-777320 kubelet[661]: E0831 23:15:13.557443     661 pod_workers.go:191] Error syncing pod 602e3f57-a665-4345-99f4-ac5f270847b4 ("dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"
	W0831 23:17:29.230980 1378731 logs.go:138] Found kubelet problem: Aug 31 23:15:15 old-k8s-version-777320 kubelet[661]: E0831 23:15:15.557843     661 pod_workers.go:191] Error syncing pod b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01 ("metrics-server-9975d5f86-dl7gj_kube-system(b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0831 23:17:29.231342 1378731 logs.go:138] Found kubelet problem: Aug 31 23:15:27 old-k8s-version-777320 kubelet[661]: E0831 23:15:27.558332     661 pod_workers.go:191] Error syncing pod b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01 ("metrics-server-9975d5f86-dl7gj_kube-system(b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0831 23:17:29.231835 1378731 logs.go:138] Found kubelet problem: Aug 31 23:15:28 old-k8s-version-777320 kubelet[661]: E0831 23:15:28.485753     661 pod_workers.go:191] Error syncing pod 602e3f57-a665-4345-99f4-ac5f270847b4 ("dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"
	W0831 23:17:29.232186 1378731 logs.go:138] Found kubelet problem: Aug 31 23:15:35 old-k8s-version-777320 kubelet[661]: E0831 23:15:35.101354     661 pod_workers.go:191] Error syncing pod 602e3f57-a665-4345-99f4-ac5f270847b4 ("dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"
	W0831 23:17:29.232406 1378731 logs.go:138] Found kubelet problem: Aug 31 23:15:41 old-k8s-version-777320 kubelet[661]: E0831 23:15:41.558221     661 pod_workers.go:191] Error syncing pod b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01 ("metrics-server-9975d5f86-dl7gj_kube-system(b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0831 23:17:29.232779 1378731 logs.go:138] Found kubelet problem: Aug 31 23:15:46 old-k8s-version-777320 kubelet[661]: E0831 23:15:46.558727     661 pod_workers.go:191] Error syncing pod 602e3f57-a665-4345-99f4-ac5f270847b4 ("dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"
	W0831 23:17:29.233028 1378731 logs.go:138] Found kubelet problem: Aug 31 23:15:52 old-k8s-version-777320 kubelet[661]: E0831 23:15:52.560966     661 pod_workers.go:191] Error syncing pod b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01 ("metrics-server-9975d5f86-dl7gj_kube-system(b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0831 23:17:29.233399 1378731 logs.go:138] Found kubelet problem: Aug 31 23:16:00 old-k8s-version-777320 kubelet[661]: E0831 23:16:00.559529     661 pod_workers.go:191] Error syncing pod 602e3f57-a665-4345-99f4-ac5f270847b4 ("dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"
	W0831 23:17:29.233620 1378731 logs.go:138] Found kubelet problem: Aug 31 23:16:04 old-k8s-version-777320 kubelet[661]: E0831 23:16:04.557871     661 pod_workers.go:191] Error syncing pod b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01 ("metrics-server-9975d5f86-dl7gj_kube-system(b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0831 23:17:29.233986 1378731 logs.go:138] Found kubelet problem: Aug 31 23:16:12 old-k8s-version-777320 kubelet[661]: E0831 23:16:12.558116     661 pod_workers.go:191] Error syncing pod 602e3f57-a665-4345-99f4-ac5f270847b4 ("dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"
	W0831 23:17:29.234213 1378731 logs.go:138] Found kubelet problem: Aug 31 23:16:17 old-k8s-version-777320 kubelet[661]: E0831 23:16:17.558685     661 pod_workers.go:191] Error syncing pod b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01 ("metrics-server-9975d5f86-dl7gj_kube-system(b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0831 23:17:29.234579 1378731 logs.go:138] Found kubelet problem: Aug 31 23:16:26 old-k8s-version-777320 kubelet[661]: E0831 23:16:26.559804     661 pod_workers.go:191] Error syncing pod 602e3f57-a665-4345-99f4-ac5f270847b4 ("dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"
	W0831 23:17:29.234813 1378731 logs.go:138] Found kubelet problem: Aug 31 23:16:30 old-k8s-version-777320 kubelet[661]: E0831 23:16:30.558289     661 pod_workers.go:191] Error syncing pod b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01 ("metrics-server-9975d5f86-dl7gj_kube-system(b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0831 23:17:29.235197 1378731 logs.go:138] Found kubelet problem: Aug 31 23:16:37 old-k8s-version-777320 kubelet[661]: E0831 23:16:37.557945     661 pod_workers.go:191] Error syncing pod 602e3f57-a665-4345-99f4-ac5f270847b4 ("dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"
	W0831 23:17:29.235435 1378731 logs.go:138] Found kubelet problem: Aug 31 23:16:45 old-k8s-version-777320 kubelet[661]: E0831 23:16:45.557768     661 pod_workers.go:191] Error syncing pod b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01 ("metrics-server-9975d5f86-dl7gj_kube-system(b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0831 23:17:29.235812 1378731 logs.go:138] Found kubelet problem: Aug 31 23:16:50 old-k8s-version-777320 kubelet[661]: E0831 23:16:50.558846     661 pod_workers.go:191] Error syncing pod 602e3f57-a665-4345-99f4-ac5f270847b4 ("dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"
	W0831 23:17:29.236045 1378731 logs.go:138] Found kubelet problem: Aug 31 23:16:58 old-k8s-version-777320 kubelet[661]: E0831 23:16:58.561819     661 pod_workers.go:191] Error syncing pod b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01 ("metrics-server-9975d5f86-dl7gj_kube-system(b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0831 23:17:29.236417 1378731 logs.go:138] Found kubelet problem: Aug 31 23:17:02 old-k8s-version-777320 kubelet[661]: E0831 23:17:02.558100     661 pod_workers.go:191] Error syncing pod 602e3f57-a665-4345-99f4-ac5f270847b4 ("dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"
	W0831 23:17:29.236650 1378731 logs.go:138] Found kubelet problem: Aug 31 23:17:09 old-k8s-version-777320 kubelet[661]: E0831 23:17:09.557874     661 pod_workers.go:191] Error syncing pod b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01 ("metrics-server-9975d5f86-dl7gj_kube-system(b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0831 23:17:29.237021 1378731 logs.go:138] Found kubelet problem: Aug 31 23:17:17 old-k8s-version-777320 kubelet[661]: E0831 23:17:17.557481     661 pod_workers.go:191] Error syncing pod 602e3f57-a665-4345-99f4-ac5f270847b4 ("dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"
	W0831 23:17:29.237248 1378731 logs.go:138] Found kubelet problem: Aug 31 23:17:24 old-k8s-version-777320 kubelet[661]: E0831 23:17:24.557898     661 pod_workers.go:191] Error syncing pod b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01 ("metrics-server-9975d5f86-dl7gj_kube-system(b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0831 23:17:29.237274 1378731 logs.go:123] Gathering logs for dmesg ...
	I0831 23:17:29.237305 1378731 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 23:17:29.257013 1378731 logs.go:123] Gathering logs for describe nodes ...
	I0831 23:17:29.257041 1378731 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 23:17:29.448128 1378731 logs.go:123] Gathering logs for kube-apiserver [24d2daafe86a0dcb6af4171206676787738fec4b49e748c8e217d63f6af8bb12] ...
	I0831 23:17:29.448212 1378731 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 24d2daafe86a0dcb6af4171206676787738fec4b49e748c8e217d63f6af8bb12"
	I0831 23:17:29.539734 1378731 logs.go:123] Gathering logs for coredns [83c393cb9b979e9591d3e9004c20ad7a85c3cf5a2fb01002fa02cdd21598c0ee] ...
	I0831 23:17:29.539770 1378731 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 83c393cb9b979e9591d3e9004c20ad7a85c3cf5a2fb01002fa02cdd21598c0ee"
	I0831 23:17:29.591574 1378731 logs.go:123] Gathering logs for kube-scheduler [f3564f974c7c186a4cf3110fbdf83d7607a54cbd7d58484326748df57213e666] ...
	I0831 23:17:29.591602 1378731 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f3564f974c7c186a4cf3110fbdf83d7607a54cbd7d58484326748df57213e666"
	I0831 23:17:29.658215 1378731 logs.go:123] Gathering logs for kindnet [0ddaf9925b5b8a2498baf923aedda0b04394c82cc0c7c18555b7233ea40ba24c] ...
	I0831 23:17:29.658241 1378731 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0ddaf9925b5b8a2498baf923aedda0b04394c82cc0c7c18555b7233ea40ba24c"
	I0831 23:17:29.717457 1378731 logs.go:123] Gathering logs for storage-provisioner [f7e4d956562c970c31e64420c5803c72175a04dd6fcef52066a3ece0a6233f9f] ...
	I0831 23:17:29.717478 1378731 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f7e4d956562c970c31e64420c5803c72175a04dd6fcef52066a3ece0a6233f9f"
	I0831 23:17:29.781489 1378731 logs.go:123] Gathering logs for storage-provisioner [a7e7bee1e72395c1a5b201c2a16ae1e8c0725a75e13db42a8afd0bd7b61f1a6b] ...
	I0831 23:17:29.781520 1378731 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a7e7bee1e72395c1a5b201c2a16ae1e8c0725a75e13db42a8afd0bd7b61f1a6b"
	I0831 23:17:29.848601 1378731 out.go:358] Setting ErrFile to fd 2...
	I0831 23:17:29.848638 1378731 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0831 23:17:29.848687 1378731 out.go:270] X Problems detected in kubelet:
	W0831 23:17:29.848702 1378731 out.go:270]   Aug 31 23:16:58 old-k8s-version-777320 kubelet[661]: E0831 23:16:58.561819     661 pod_workers.go:191] Error syncing pod b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01 ("metrics-server-9975d5f86-dl7gj_kube-system(b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0831 23:17:29.848708 1378731 out.go:270]   Aug 31 23:17:02 old-k8s-version-777320 kubelet[661]: E0831 23:17:02.558100     661 pod_workers.go:191] Error syncing pod 602e3f57-a665-4345-99f4-ac5f270847b4 ("dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"
	W0831 23:17:29.848719 1378731 out.go:270]   Aug 31 23:17:09 old-k8s-version-777320 kubelet[661]: E0831 23:17:09.557874     661 pod_workers.go:191] Error syncing pod b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01 ("metrics-server-9975d5f86-dl7gj_kube-system(b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0831 23:17:29.848725 1378731 out.go:270]   Aug 31 23:17:17 old-k8s-version-777320 kubelet[661]: E0831 23:17:17.557481     661 pod_workers.go:191] Error syncing pod 602e3f57-a665-4345-99f4-ac5f270847b4 ("dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"
	W0831 23:17:29.848731 1378731 out.go:270]   Aug 31 23:17:24 old-k8s-version-777320 kubelet[661]: E0831 23:17:24.557898     661 pod_workers.go:191] Error syncing pod b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01 ("metrics-server-9975d5f86-dl7gj_kube-system(b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0831 23:17:29.848742 1378731 out.go:358] Setting ErrFile to fd 2...
	I0831 23:17:29.848748 1378731 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 23:17:39.849201 1378731 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0831 23:17:39.865165 1378731 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I0831 23:17:39.867576 1378731 out.go:201] 
	W0831 23:17:39.869895 1378731 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W0831 23:17:39.870097 1378731 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	W0831 23:17:39.870173 1378731 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	W0831 23:17:39.870268 1378731 out.go:270] * 
	W0831 23:17:39.871923 1378731 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0831 23:17:39.874230 1378731 out.go:201] 

** /stderr **
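The stderr capture above reduces to two recurring kubelet failures: metrics-server never starts because its image is pinned to the unreachable registry fake.domain (ImagePullBackOff; the failed DNS lookup on 192.168.85.1:53 confirms the host does not resolve), and dashboard-metrics-scraper keeps crashing with an escalating restart back-off (40s, then 1m20s, then 2m40s). A minimal triage sketch for failures of this shape, using the pod names and endpoint taken from the log above (they differ per run) and the profile name as the kubectl context:

    # Confirm the apiserver itself is reachable (the log shows /healthz returning 200 "ok")
    curl -sk https://192.168.85.2:8443/healthz

    # ImagePullBackOff: the Events section of describe names the registry/DNS error
    kubectl --context old-k8s-version-777320 -n kube-system \
        describe pod metrics-server-9975d5f86-dl7gj

    # CrashLoopBackOff: the previous container's logs usually hold the crash reason
    kubectl --context old-k8s-version-777320 -n kubernetes-dashboard \
        logs dashboard-metrics-scraper-8d5bb5db8-klp6x --previous

Note that the fake.domain registry is intentional here: the Audit table below shows metrics-server being enabled with --registries=MetricsServer=fake.domain, so the ImagePullBackOff is expected test noise rather than the cause of the failure.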
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p old-k8s-version-777320 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0": exit status 102
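The exit status 102 reported here accompanies the K8S_UNHEALTHY_CONTROL_PLANE error in the stderr above, and the output itself suggests a full purge before retrying. A sketch of that remediation, with the start flags trimmed to the ones relevant here (the full invocation is in the assertion above); note that --all deletes every local minikube profile and --purge removes the cached .minikube state:

    out/minikube-linux-arm64 delete --all --purge
    out/minikube-linux-arm64 start -p old-k8s-version-777320 --memory=2200 \
        --driver=docker --container-runtime=containerd --kubernetes-version=v1.20.0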
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect old-k8s-version-777320
helpers_test.go:236: (dbg) docker inspect old-k8s-version-777320:

-- stdout --
	[
	    {
	        "Id": "a0fb772b5847e065625edac3c79e4b2949dd304f0ad24eea8325ae73091d9423",
	        "Created": "2024-08-31T23:08:38.062309395Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1378943,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-08-31T23:11:31.315449048Z",
	            "FinishedAt": "2024-08-31T23:11:30.105645735Z"
	        },
	        "Image": "sha256:eb620c1d7126103417d4dc31eb6aaaf95b0878713d0303a36cb77002c31b0deb",
	        "ResolvConfPath": "/var/lib/docker/containers/a0fb772b5847e065625edac3c79e4b2949dd304f0ad24eea8325ae73091d9423/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a0fb772b5847e065625edac3c79e4b2949dd304f0ad24eea8325ae73091d9423/hostname",
	        "HostsPath": "/var/lib/docker/containers/a0fb772b5847e065625edac3c79e4b2949dd304f0ad24eea8325ae73091d9423/hosts",
	        "LogPath": "/var/lib/docker/containers/a0fb772b5847e065625edac3c79e4b2949dd304f0ad24eea8325ae73091d9423/a0fb772b5847e065625edac3c79e4b2949dd304f0ad24eea8325ae73091d9423-json.log",
	        "Name": "/old-k8s-version-777320",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-777320:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-777320",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/8add83426e057939da9c6fbb8f75474c978129250571fd895e712911f5023d00-init/diff:/var/lib/docker/overlay2/e3c84f94aefed91511672b053b6e522f115b49b6c1ddbd2cec747cd29cd10f0e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8add83426e057939da9c6fbb8f75474c978129250571fd895e712911f5023d00/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8add83426e057939da9c6fbb8f75474c978129250571fd895e712911f5023d00/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8add83426e057939da9c6fbb8f75474c978129250571fd895e712911f5023d00/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-777320",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-777320/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-777320",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-777320",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-777320",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "359bf95e5684a6fd47bf81d741668739b6cdf2ae5ac336347ef3227c5bede6e4",
	            "SandboxKey": "/var/run/docker/netns/359bf95e5684",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34569"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34570"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34573"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34571"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34572"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-777320": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:55:02",
	                    "DriverOpts": null,
	                    "NetworkID": "d45d3b66a7a4ac5b8b8d5d381122902efc71f6e44cafe9c01564b674a675e021",
	                    "EndpointID": "b0b1a3b2bd7eb7e346f7ee15206f5542d274c662877059da74aae468495efd4a",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-777320",
	                        "a0fb772b5847"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
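Rather than scanning the full JSON dump, individual fields can be pulled from docker inspect with a Go template; for example, the container state and the host port published for the apiserver, both of which match the dump above:

    docker inspect -f '{{.State.Status}}' old-k8s-version-777320   # running
    docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' \
        old-k8s-version-777320                                     # 34572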
helpers_test.go:240: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-777320 -n old-k8s-version-777320
helpers_test.go:245: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-777320 logs -n 25
helpers_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-777320 logs -n 25: (2.683501321s)
helpers_test.go:253: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |         Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	| start   | -p cert-expiration-429828                              | cert-expiration-429828   | jenkins | v1.33.1 | 31 Aug 24 23:07 UTC | 31 Aug 24 23:07 UTC |
	|         | --memory=2048                                          |                          |         |         |                     |                     |
	|         | --cert-expiration=3m                                   |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	| ssh     | force-systemd-env-627131                               | force-systemd-env-627131 | jenkins | v1.33.1 | 31 Aug 24 23:07 UTC | 31 Aug 24 23:07 UTC |
	|         | ssh cat                                                |                          |         |         |                     |                     |
	|         | /etc/containerd/config.toml                            |                          |         |         |                     |                     |
	| delete  | -p force-systemd-env-627131                            | force-systemd-env-627131 | jenkins | v1.33.1 | 31 Aug 24 23:07 UTC | 31 Aug 24 23:07 UTC |
	| start   | -p cert-options-610986                                 | cert-options-610986      | jenkins | v1.33.1 | 31 Aug 24 23:07 UTC | 31 Aug 24 23:08 UTC |
	|         | --memory=2048                                          |                          |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1                              |                          |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15                          |                          |         |         |                     |                     |
	|         | --apiserver-names=localhost                            |                          |         |         |                     |                     |
	|         | --apiserver-names=www.google.com                       |                          |         |         |                     |                     |
	|         | --apiserver-port=8555                                  |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	| ssh     | cert-options-610986 ssh                                | cert-options-610986      | jenkins | v1.33.1 | 31 Aug 24 23:08 UTC | 31 Aug 24 23:08 UTC |
	|         | openssl x509 -text -noout -in                          |                          |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                          |         |         |                     |                     |
	| ssh     | -p cert-options-610986 -- sudo                         | cert-options-610986      | jenkins | v1.33.1 | 31 Aug 24 23:08 UTC | 31 Aug 24 23:08 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                          |         |         |                     |                     |
	| delete  | -p cert-options-610986                                 | cert-options-610986      | jenkins | v1.33.1 | 31 Aug 24 23:08 UTC | 31 Aug 24 23:08 UTC |
	| start   | -p old-k8s-version-777320                              | old-k8s-version-777320   | jenkins | v1.33.1 | 31 Aug 24 23:08 UTC | 31 Aug 24 23:11 UTC |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                          |         |         |                     |                     |
	|         | --kvm-network=default                                  |                          |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                          |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                          |         |         |                     |                     |
	|         | --keep-context=false                                   |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                          |         |         |                     |                     |
	| start   | -p cert-expiration-429828                              | cert-expiration-429828   | jenkins | v1.33.1 | 31 Aug 24 23:10 UTC | 31 Aug 24 23:11 UTC |
	|         | --memory=2048                                          |                          |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	| delete  | -p cert-expiration-429828                              | cert-expiration-429828   | jenkins | v1.33.1 | 31 Aug 24 23:11 UTC | 31 Aug 24 23:11 UTC |
	| start   | -p no-preload-039701                                   | no-preload-039701        | jenkins | v1.33.1 | 31 Aug 24 23:11 UTC | 31 Aug 24 23:12 UTC |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr                                      |                          |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                          |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-777320        | old-k8s-version-777320   | jenkins | v1.33.1 | 31 Aug 24 23:11 UTC | 31 Aug 24 23:11 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                          |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                          |         |         |                     |                     |
	| stop    | -p old-k8s-version-777320                              | old-k8s-version-777320   | jenkins | v1.33.1 | 31 Aug 24 23:11 UTC | 31 Aug 24 23:11 UTC |
	|         | --alsologtostderr -v=3                                 |                          |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-777320             | old-k8s-version-777320   | jenkins | v1.33.1 | 31 Aug 24 23:11 UTC | 31 Aug 24 23:11 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                          |         |         |                     |                     |
	| start   | -p old-k8s-version-777320                              | old-k8s-version-777320   | jenkins | v1.33.1 | 31 Aug 24 23:11 UTC |                     |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                          |         |         |                     |                     |
	|         | --kvm-network=default                                  |                          |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                          |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                          |         |         |                     |                     |
	|         | --keep-context=false                                   |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                          |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-039701             | no-preload-039701        | jenkins | v1.33.1 | 31 Aug 24 23:12 UTC | 31 Aug 24 23:12 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                          |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                          |         |         |                     |                     |
	| stop    | -p no-preload-039701                                   | no-preload-039701        | jenkins | v1.33.1 | 31 Aug 24 23:12 UTC | 31 Aug 24 23:12 UTC |
	|         | --alsologtostderr -v=3                                 |                          |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-039701                  | no-preload-039701        | jenkins | v1.33.1 | 31 Aug 24 23:12 UTC | 31 Aug 24 23:12 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                          |         |         |                     |                     |
	| start   | -p no-preload-039701                                   | no-preload-039701        | jenkins | v1.33.1 | 31 Aug 24 23:12 UTC | 31 Aug 24 23:17 UTC |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr                                      |                          |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                          |         |         |                     |                     |
	| image   | no-preload-039701 image list                           | no-preload-039701        | jenkins | v1.33.1 | 31 Aug 24 23:17 UTC | 31 Aug 24 23:17 UTC |
	|         | --format=json                                          |                          |         |         |                     |                     |
	| pause   | -p no-preload-039701                                   | no-preload-039701        | jenkins | v1.33.1 | 31 Aug 24 23:17 UTC | 31 Aug 24 23:17 UTC |
	|         | --alsologtostderr -v=1                                 |                          |         |         |                     |                     |
	| unpause | -p no-preload-039701                                   | no-preload-039701        | jenkins | v1.33.1 | 31 Aug 24 23:17 UTC | 31 Aug 24 23:17 UTC |
	|         | --alsologtostderr -v=1                                 |                          |         |         |                     |                     |
	| delete  | -p no-preload-039701                                   | no-preload-039701        | jenkins | v1.33.1 | 31 Aug 24 23:17 UTC | 31 Aug 24 23:17 UTC |
	| delete  | -p no-preload-039701                                   | no-preload-039701        | jenkins | v1.33.1 | 31 Aug 24 23:17 UTC | 31 Aug 24 23:17 UTC |
	| start   | -p embed-certs-642101                                  | embed-certs-642101       | jenkins | v1.33.1 | 31 Aug 24 23:17 UTC |                     |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                          |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                          |         |         |                     |                     |
	|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
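	
	The Audit table above is minikube's recorded command history for this host. Minikube also persists this history as JSON under MINIKUBE_HOME; the path below assumes the default ~/.minikube location and is an illustration, not taken from this report:
	
	    # Assumed default location; adjust if MINIKUBE_HOME points elsewhere
	    tail -n 20 ~/.minikube/logs/audit.json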
	
	
	==> Last Start <==
	Log file created at: 2024/08/31 23:17:29
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0831 23:17:29.184753 1389847 out.go:345] Setting OutFile to fd 1 ...
	I0831 23:17:29.184889 1389847 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 23:17:29.184900 1389847 out.go:358] Setting ErrFile to fd 2...
	I0831 23:17:29.184907 1389847 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 23:17:29.185182 1389847 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-1161402/.minikube/bin
	I0831 23:17:29.185592 1389847 out.go:352] Setting JSON to false
	I0831 23:17:29.186591 1389847 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":25198,"bootTime":1725121051,"procs":219,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0831 23:17:29.186660 1389847 start.go:139] virtualization:  
	I0831 23:17:29.190022 1389847 out.go:177] * [embed-certs-642101] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0831 23:17:29.193717 1389847 out.go:177]   - MINIKUBE_LOCATION=18943
	I0831 23:17:29.193772 1389847 notify.go:220] Checking for updates...
	I0831 23:17:29.198645 1389847 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0831 23:17:29.201017 1389847 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18943-1161402/kubeconfig
	I0831 23:17:29.203514 1389847 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-1161402/.minikube
	I0831 23:17:29.205888 1389847 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0831 23:17:29.208440 1389847 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0831 23:17:29.210917 1389847 config.go:182] Loaded profile config "old-k8s-version-777320": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0831 23:17:29.211007 1389847 driver.go:392] Setting default libvirt URI to qemu:///system
	I0831 23:17:29.256365 1389847 docker.go:123] docker version: linux-27.2.0:Docker Engine - Community
	I0831 23:17:29.256490 1389847 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0831 23:17:29.346242 1389847 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:53 SystemTime:2024-08-31 23:17:29.330591836 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0831 23:17:29.346355 1389847 docker.go:307] overlay module found
	I0831 23:17:29.348852 1389847 out.go:177] * Using the docker driver based on user configuration
	I0831 23:17:29.351031 1389847 start.go:297] selected driver: docker
	I0831 23:17:29.351057 1389847 start.go:901] validating driver "docker" against <nil>
	I0831 23:17:29.351072 1389847 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0831 23:17:29.351714 1389847 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0831 23:17:29.453381 1389847 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:53 SystemTime:2024-08-31 23:17:29.440161431 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
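The two docker info dumps above are minikube shelling out to docker system info with a JSON template and parsing the result into its own info struct. To spot-check the same fields by hand on the host, a minimal sketch using the Docker CLI's built-in template support (no tooling beyond the docker client itself is assumed):

    # Print the handful of fields minikube's driver validation cares about
    docker system info --format '{{.ServerVersion}} {{.OSType}}/{{.Architecture}} NCPU={{.NCPU}} MemTotal={{.MemTotal}}'

On this runner that should report 27.2.0 linux/aarch64 with 2 CPUs, matching the parsed struct above.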
	I0831 23:17:29.453544 1389847 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0831 23:17:29.453765 1389847 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0831 23:17:29.456217 1389847 out.go:177] * Using Docker driver with root privileges
	I0831 23:17:29.458673 1389847 cni.go:84] Creating CNI manager for ""
	I0831 23:17:29.458696 1389847 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0831 23:17:29.458707 1389847 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0831 23:17:29.458788 1389847 start.go:340] cluster config:
	{Name:embed-certs-642101 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-642101 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 23:17:29.460861 1389847 out.go:177] * Starting "embed-certs-642101" primary control-plane node in "embed-certs-642101" cluster
	I0831 23:17:29.462877 1389847 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0831 23:17:29.465167 1389847 out.go:177] * Pulling base image v0.0.44-1724862063-19530 ...
	I0831 23:17:29.466961 1389847 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime containerd
	I0831 23:17:29.467013 1389847 preload.go:146] Found local preload: /home/jenkins/minikube-integration/18943-1161402/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4
	I0831 23:17:29.467022 1389847 cache.go:56] Caching tarball of preloaded images
	I0831 23:17:29.467108 1389847 preload.go:172] Found /home/jenkins/minikube-integration/18943-1161402/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0831 23:17:29.467118 1389847 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on containerd
	I0831 23:17:29.467222 1389847 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/embed-certs-642101/config.json ...
	I0831 23:17:29.467239 1389847 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/embed-certs-642101/config.json: {Name:mkfe72873c21533ddfd13c6ee833dabdbedf9987 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
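The cluster config dumped above is what gets persisted to the config.json path in the previous two lines, serialized with the same field names shown in the dump. Assuming jq is available on the host (it is not part of the test run), the key fields can be pulled back out of the saved profile, for example:

    jq '{Driver, KubernetesVersion: .KubernetesConfig.KubernetesVersion, ContainerRuntime: .KubernetesConfig.ContainerRuntime}' \
      /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/embed-certs-642101/config.json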
	I0831 23:17:29.467341 1389847 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 in local docker daemon
	W0831 23:17:29.491296 1389847 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 is of wrong architecture
	I0831 23:17:29.491314 1389847 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 to local cache
	I0831 23:17:29.491384 1389847 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 in local cache directory
	I0831 23:17:29.491401 1389847 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 in local cache directory, skipping pull
	I0831 23:17:29.491405 1389847 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 exists in cache, skipping pull
	I0831 23:17:29.491413 1389847 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 as a tarball
	I0831 23:17:29.491418 1389847 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 from local cache
	I0831 23:17:29.652141 1389847 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 from cached tarball
	I0831 23:17:29.652174 1389847 cache.go:194] Successfully downloaded all kic artifacts
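The "wrong architecture" warning at image.go:95 above fires when the copy of the kicbase image in the local docker daemon does not match the host's architecture (aarch64 here), so minikube falls back to loading the image from its cached tarball instead, as the subsequent cache.go lines show. To check by hand which architecture a locally held copy actually is, a sketch (the image reference must match whatever is present in the daemon):

    docker image inspect --format '{{.Os}}/{{.Architecture}}' \
      gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530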
	I0831 23:17:29.652217 1389847 start.go:360] acquireMachinesLock for embed-certs-642101: {Name:mk89ae457c6278bd9e003059d7535244d6329bfc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0831 23:17:29.653860 1389847 start.go:364] duration metric: took 1.615838ms to acquireMachinesLock for "embed-certs-642101"
	I0831 23:17:29.653911 1389847 start.go:93] Provisioning new machine with config: &{Name:embed-certs-642101 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-642101 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0831 23:17:29.654014 1389847 start.go:125] createHost starting for "" (driver="docker")
	I0831 23:17:27.478286 1378731 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0831 23:17:27.490816 1378731 api_server.go:72] duration metric: took 5m48.826929616s to wait for apiserver process to appear ...
	I0831 23:17:27.490838 1378731 api_server.go:88] waiting for apiserver healthz status ...
	I0831 23:17:27.490882 1378731 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0831 23:17:27.490938 1378731 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0831 23:17:27.569029 1378731 cri.go:89] found id: "2b314ddb4c1637e9a96260a3016921d4c647744eac2a23a86ba97ac80539955e"
	I0831 23:17:27.569048 1378731 cri.go:89] found id: "24d2daafe86a0dcb6af4171206676787738fec4b49e748c8e217d63f6af8bb12"
	I0831 23:17:27.569053 1378731 cri.go:89] found id: ""
	I0831 23:17:27.569060 1378731 logs.go:276] 2 containers: [2b314ddb4c1637e9a96260a3016921d4c647744eac2a23a86ba97ac80539955e 24d2daafe86a0dcb6af4171206676787738fec4b49e748c8e217d63f6af8bb12]
	I0831 23:17:27.569113 1378731 ssh_runner.go:195] Run: which crictl
	I0831 23:17:27.573327 1378731 ssh_runner.go:195] Run: which crictl
	I0831 23:17:27.577199 1378731 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0831 23:17:27.577262 1378731 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0831 23:17:27.626986 1378731 cri.go:89] found id: "adc270d3e8398b7a86ff787dfd6fa155a7deeb47c29d94cf0371e7f3af2cf66a"
	I0831 23:17:27.627004 1378731 cri.go:89] found id: "36189b23eed4dd70cc8796dc64533ece0f97c8515188d9c4bb6817079cf848fa"
	I0831 23:17:27.627009 1378731 cri.go:89] found id: ""
	I0831 23:17:27.627017 1378731 logs.go:276] 2 containers: [adc270d3e8398b7a86ff787dfd6fa155a7deeb47c29d94cf0371e7f3af2cf66a 36189b23eed4dd70cc8796dc64533ece0f97c8515188d9c4bb6817079cf848fa]
	I0831 23:17:27.627071 1378731 ssh_runner.go:195] Run: which crictl
	I0831 23:17:27.631361 1378731 ssh_runner.go:195] Run: which crictl
	I0831 23:17:27.635643 1378731 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0831 23:17:27.635706 1378731 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0831 23:17:27.683136 1378731 cri.go:89] found id: "83c393cb9b979e9591d3e9004c20ad7a85c3cf5a2fb01002fa02cdd21598c0ee"
	I0831 23:17:27.683156 1378731 cri.go:89] found id: "26aa1d510c36cb526536c021669ae3f5436cdad4f0a6f9c30b4b361249af6077"
	I0831 23:17:27.683160 1378731 cri.go:89] found id: ""
	I0831 23:17:27.683167 1378731 logs.go:276] 2 containers: [83c393cb9b979e9591d3e9004c20ad7a85c3cf5a2fb01002fa02cdd21598c0ee 26aa1d510c36cb526536c021669ae3f5436cdad4f0a6f9c30b4b361249af6077]
	I0831 23:17:27.683221 1378731 ssh_runner.go:195] Run: which crictl
	I0831 23:17:27.687219 1378731 ssh_runner.go:195] Run: which crictl
	I0831 23:17:27.692442 1378731 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0831 23:17:27.692509 1378731 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0831 23:17:27.789373 1378731 cri.go:89] found id: "f3564f974c7c186a4cf3110fbdf83d7607a54cbd7d58484326748df57213e666"
	I0831 23:17:27.789445 1378731 cri.go:89] found id: "cb403497717362a835119a12fcd9a98f048e1513652343987c5706732ded954d"
	I0831 23:17:27.789465 1378731 cri.go:89] found id: ""
	I0831 23:17:27.789493 1378731 logs.go:276] 2 containers: [f3564f974c7c186a4cf3110fbdf83d7607a54cbd7d58484326748df57213e666 cb403497717362a835119a12fcd9a98f048e1513652343987c5706732ded954d]
	I0831 23:17:27.789562 1378731 ssh_runner.go:195] Run: which crictl
	I0831 23:17:27.799289 1378731 ssh_runner.go:195] Run: which crictl
	I0831 23:17:27.803458 1378731 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0831 23:17:27.803580 1378731 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0831 23:17:27.870602 1378731 cri.go:89] found id: "a630dbfa8aa905e5ea3326c49649056da78bba5cfd2beda22ff0f2f93515a197"
	I0831 23:17:27.870624 1378731 cri.go:89] found id: "375dfe16bc1a3f88bb1adae552829f1de1f6ccd7a3b2bf3eb9e823c85daa3329"
	I0831 23:17:27.870629 1378731 cri.go:89] found id: ""
	I0831 23:17:27.870636 1378731 logs.go:276] 2 containers: [a630dbfa8aa905e5ea3326c49649056da78bba5cfd2beda22ff0f2f93515a197 375dfe16bc1a3f88bb1adae552829f1de1f6ccd7a3b2bf3eb9e823c85daa3329]
	I0831 23:17:27.870694 1378731 ssh_runner.go:195] Run: which crictl
	I0831 23:17:27.874793 1378731 ssh_runner.go:195] Run: which crictl
	I0831 23:17:27.878737 1378731 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0831 23:17:27.878804 1378731 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0831 23:17:27.960687 1378731 cri.go:89] found id: "2ba9272459fbd3d5920c42ffd67f3fc7be523ddb8abd1b3e2f8db38f6db5a2bd"
	I0831 23:17:27.960708 1378731 cri.go:89] found id: "4a53de8c7cfef15e1b4d2eb2d08e3992e87a5d45cb56ec1579cce90b650a86a3"
	I0831 23:17:27.960713 1378731 cri.go:89] found id: ""
	I0831 23:17:27.960720 1378731 logs.go:276] 2 containers: [2ba9272459fbd3d5920c42ffd67f3fc7be523ddb8abd1b3e2f8db38f6db5a2bd 4a53de8c7cfef15e1b4d2eb2d08e3992e87a5d45cb56ec1579cce90b650a86a3]
	I0831 23:17:27.960779 1378731 ssh_runner.go:195] Run: which crictl
	I0831 23:17:27.964573 1378731 ssh_runner.go:195] Run: which crictl
	I0831 23:17:27.969609 1378731 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0831 23:17:27.969674 1378731 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0831 23:17:28.029658 1378731 cri.go:89] found id: "e7bba7657fd955c1b7ffaed6f8954f4add6c68cbbebb5450b878b68fecc3dfd4"
	I0831 23:17:28.029682 1378731 cri.go:89] found id: "0ddaf9925b5b8a2498baf923aedda0b04394c82cc0c7c18555b7233ea40ba24c"
	I0831 23:17:28.029689 1378731 cri.go:89] found id: ""
	I0831 23:17:28.029696 1378731 logs.go:276] 2 containers: [e7bba7657fd955c1b7ffaed6f8954f4add6c68cbbebb5450b878b68fecc3dfd4 0ddaf9925b5b8a2498baf923aedda0b04394c82cc0c7c18555b7233ea40ba24c]
	I0831 23:17:28.029760 1378731 ssh_runner.go:195] Run: which crictl
	I0831 23:17:28.034482 1378731 ssh_runner.go:195] Run: which crictl
	I0831 23:17:28.039042 1378731 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0831 23:17:28.039128 1378731 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0831 23:17:28.089883 1378731 cri.go:89] found id: "3a128b2ebef5226632792617aecdd7d9fa214ff983a449971d7ccdfab3a99f21"
	I0831 23:17:28.089909 1378731 cri.go:89] found id: ""
	I0831 23:17:28.089923 1378731 logs.go:276] 1 containers: [3a128b2ebef5226632792617aecdd7d9fa214ff983a449971d7ccdfab3a99f21]
	I0831 23:17:28.090037 1378731 ssh_runner.go:195] Run: which crictl
	I0831 23:17:28.094024 1378731 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0831 23:17:28.094139 1378731 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0831 23:17:28.148204 1378731 cri.go:89] found id: "f7e4d956562c970c31e64420c5803c72175a04dd6fcef52066a3ece0a6233f9f"
	I0831 23:17:28.148263 1378731 cri.go:89] found id: "a7e7bee1e72395c1a5b201c2a16ae1e8c0725a75e13db42a8afd0bd7b61f1a6b"
	I0831 23:17:28.148292 1378731 cri.go:89] found id: ""
	I0831 23:17:28.148313 1378731 logs.go:276] 2 containers: [f7e4d956562c970c31e64420c5803c72175a04dd6fcef52066a3ece0a6233f9f a7e7bee1e72395c1a5b201c2a16ae1e8c0725a75e13db42a8afd0bd7b61f1a6b]
	I0831 23:17:28.148397 1378731 ssh_runner.go:195] Run: which crictl
	I0831 23:17:28.153035 1378731 ssh_runner.go:195] Run: which crictl
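Each of the cri.go round-trips above follows the same two-step pattern: list the matching container IDs over SSH, then fetch the tail of each container's log. Reproduced by hand on the node, it is roughly the following (a sketch; substitute any of the container IDs found above):

    # Step 1: find all containers (running or exited) for a component
    sudo crictl ps -a --quiet --name=kube-apiserver
    # Step 2: pull the last 400 log lines for one of the returned IDs
    sudo crictl logs --tail 400 2b314ddb4c1637e9a96260a3016921d4c647744eac2a23a86ba97ac80539955e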
	I0831 23:17:28.157041 1378731 logs.go:123] Gathering logs for etcd [36189b23eed4dd70cc8796dc64533ece0f97c8515188d9c4bb6817079cf848fa] ...
	I0831 23:17:28.157118 1378731 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 36189b23eed4dd70cc8796dc64533ece0f97c8515188d9c4bb6817079cf848fa"
	I0831 23:17:28.219799 1378731 logs.go:123] Gathering logs for containerd ...
	I0831 23:17:28.219874 1378731 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0831 23:17:28.289711 1378731 logs.go:123] Gathering logs for etcd [adc270d3e8398b7a86ff787dfd6fa155a7deeb47c29d94cf0371e7f3af2cf66a] ...
	I0831 23:17:28.289790 1378731 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 adc270d3e8398b7a86ff787dfd6fa155a7deeb47c29d94cf0371e7f3af2cf66a"
	I0831 23:17:28.344442 1378731 logs.go:123] Gathering logs for kube-controller-manager [4a53de8c7cfef15e1b4d2eb2d08e3992e87a5d45cb56ec1579cce90b650a86a3] ...
	I0831 23:17:28.344473 1378731 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4a53de8c7cfef15e1b4d2eb2d08e3992e87a5d45cb56ec1579cce90b650a86a3"
	I0831 23:17:28.417370 1378731 logs.go:123] Gathering logs for kubernetes-dashboard [3a128b2ebef5226632792617aecdd7d9fa214ff983a449971d7ccdfab3a99f21] ...
	I0831 23:17:28.417462 1378731 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3a128b2ebef5226632792617aecdd7d9fa214ff983a449971d7ccdfab3a99f21"
	I0831 23:17:28.475114 1378731 logs.go:123] Gathering logs for container status ...
	I0831 23:17:28.475141 1378731 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 23:17:28.555107 1378731 logs.go:123] Gathering logs for kube-apiserver [2b314ddb4c1637e9a96260a3016921d4c647744eac2a23a86ba97ac80539955e] ...
	I0831 23:17:28.555136 1378731 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b314ddb4c1637e9a96260a3016921d4c647744eac2a23a86ba97ac80539955e"
	I0831 23:17:28.652948 1378731 logs.go:123] Gathering logs for coredns [26aa1d510c36cb526536c021669ae3f5436cdad4f0a6f9c30b4b361249af6077] ...
	I0831 23:17:28.653044 1378731 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 26aa1d510c36cb526536c021669ae3f5436cdad4f0a6f9c30b4b361249af6077"
	I0831 23:17:28.718275 1378731 logs.go:123] Gathering logs for kube-scheduler [cb403497717362a835119a12fcd9a98f048e1513652343987c5706732ded954d] ...
	I0831 23:17:28.718299 1378731 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cb403497717362a835119a12fcd9a98f048e1513652343987c5706732ded954d"
	I0831 23:17:28.776545 1378731 logs.go:123] Gathering logs for kube-proxy [a630dbfa8aa905e5ea3326c49649056da78bba5cfd2beda22ff0f2f93515a197] ...
	I0831 23:17:28.776572 1378731 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a630dbfa8aa905e5ea3326c49649056da78bba5cfd2beda22ff0f2f93515a197"
	I0831 23:17:28.830806 1378731 logs.go:123] Gathering logs for kube-proxy [375dfe16bc1a3f88bb1adae552829f1de1f6ccd7a3b2bf3eb9e823c85daa3329] ...
	I0831 23:17:28.830831 1378731 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 375dfe16bc1a3f88bb1adae552829f1de1f6ccd7a3b2bf3eb9e823c85daa3329"
	I0831 23:17:28.953408 1378731 logs.go:123] Gathering logs for kube-controller-manager [2ba9272459fbd3d5920c42ffd67f3fc7be523ddb8abd1b3e2f8db38f6db5a2bd] ...
	I0831 23:17:28.953438 1378731 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ba9272459fbd3d5920c42ffd67f3fc7be523ddb8abd1b3e2f8db38f6db5a2bd"
	I0831 23:17:29.052029 1378731 logs.go:123] Gathering logs for kindnet [e7bba7657fd955c1b7ffaed6f8954f4add6c68cbbebb5450b878b68fecc3dfd4] ...
	I0831 23:17:29.052112 1378731 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7bba7657fd955c1b7ffaed6f8954f4add6c68cbbebb5450b878b68fecc3dfd4"
	I0831 23:17:29.132067 1378731 logs.go:123] Gathering logs for kubelet ...
	I0831 23:17:29.132142 1378731 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0831 23:17:29.205613 1378731 logs.go:138] Found kubelet problem: Aug 31 23:11:59 old-k8s-version-777320 kubelet[661]: E0831 23:11:59.886301     661 pod_workers.go:191] Error syncing pod b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01 ("metrics-server-9975d5f86-dl7gj_kube-system(b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0831 23:17:29.206125 1378731 logs.go:138] Found kubelet problem: Aug 31 23:12:00 old-k8s-version-777320 kubelet[661]: E0831 23:12:00.799346     661 pod_workers.go:191] Error syncing pod b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01 ("metrics-server-9975d5f86-dl7gj_kube-system(b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0831 23:17:29.209084 1378731 logs.go:138] Found kubelet problem: Aug 31 23:12:16 old-k8s-version-777320 kubelet[661]: E0831 23:12:16.573045     661 pod_workers.go:191] Error syncing pod b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01 ("metrics-server-9975d5f86-dl7gj_kube-system(b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0831 23:17:29.211702 1378731 logs.go:138] Found kubelet problem: Aug 31 23:12:28 old-k8s-version-777320 kubelet[661]: E0831 23:12:28.934864     661 pod_workers.go:191] Error syncing pod 602e3f57-a665-4345-99f4-ac5f270847b4 ("dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"
	W0831 23:17:29.212063 1378731 logs.go:138] Found kubelet problem: Aug 31 23:12:29 old-k8s-version-777320 kubelet[661]: E0831 23:12:29.938937     661 pod_workers.go:191] Error syncing pod 602e3f57-a665-4345-99f4-ac5f270847b4 ("dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"
	W0831 23:17:29.212274 1378731 logs.go:138] Found kubelet problem: Aug 31 23:12:30 old-k8s-version-777320 kubelet[661]: E0831 23:12:30.563156     661 pod_workers.go:191] Error syncing pod b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01 ("metrics-server-9975d5f86-dl7gj_kube-system(b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0831 23:17:29.212793 1378731 logs.go:138] Found kubelet problem: Aug 31 23:12:30 old-k8s-version-777320 kubelet[661]: E0831 23:12:30.944101     661 pod_workers.go:191] Error syncing pod a63ab31c-0052-473f-8538-7ccd4026e42f ("storage-provisioner_kube-system(a63ab31c-0052-473f-8538-7ccd4026e42f)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(a63ab31c-0052-473f-8538-7ccd4026e42f)"
	W0831 23:17:29.213242 1378731 logs.go:138] Found kubelet problem: Aug 31 23:12:35 old-k8s-version-777320 kubelet[661]: E0831 23:12:35.100827     661 pod_workers.go:191] Error syncing pod 602e3f57-a665-4345-99f4-ac5f270847b4 ("dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"
	W0831 23:17:29.216103 1378731 logs.go:138] Found kubelet problem: Aug 31 23:12:41 old-k8s-version-777320 kubelet[661]: E0831 23:12:41.565180     661 pod_workers.go:191] Error syncing pod b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01 ("metrics-server-9975d5f86-dl7gj_kube-system(b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0831 23:17:29.216741 1378731 logs.go:138] Found kubelet problem: Aug 31 23:12:50 old-k8s-version-777320 kubelet[661]: E0831 23:12:50.005945     661 pod_workers.go:191] Error syncing pod 602e3f57-a665-4345-99f4-ac5f270847b4 ("dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"
	W0831 23:17:29.217085 1378731 logs.go:138] Found kubelet problem: Aug 31 23:12:53 old-k8s-version-777320 kubelet[661]: E0831 23:12:53.558398     661 pod_workers.go:191] Error syncing pod b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01 ("metrics-server-9975d5f86-dl7gj_kube-system(b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0831 23:17:29.217442 1378731 logs.go:138] Found kubelet problem: Aug 31 23:12:55 old-k8s-version-777320 kubelet[661]: E0831 23:12:55.108037     661 pod_workers.go:191] Error syncing pod 602e3f57-a665-4345-99f4-ac5f270847b4 ("dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"
	W0831 23:17:29.217651 1378731 logs.go:138] Found kubelet problem: Aug 31 23:13:06 old-k8s-version-777320 kubelet[661]: E0831 23:13:06.558380     661 pod_workers.go:191] Error syncing pod b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01 ("metrics-server-9975d5f86-dl7gj_kube-system(b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0831 23:17:29.218006 1378731 logs.go:138] Found kubelet problem: Aug 31 23:13:07 old-k8s-version-777320 kubelet[661]: E0831 23:13:07.557412     661 pod_workers.go:191] Error syncing pod 602e3f57-a665-4345-99f4-ac5f270847b4 ("dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"
	W0831 23:17:29.218346 1378731 logs.go:138] Found kubelet problem: Aug 31 23:13:19 old-k8s-version-777320 kubelet[661]: E0831 23:13:19.558459     661 pod_workers.go:191] Error syncing pod b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01 ("metrics-server-9975d5f86-dl7gj_kube-system(b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0831 23:17:29.218826 1378731 logs.go:138] Found kubelet problem: Aug 31 23:13:20 old-k8s-version-777320 kubelet[661]: E0831 23:13:20.098381     661 pod_workers.go:191] Error syncing pod 602e3f57-a665-4345-99f4-ac5f270847b4 ("dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"
	W0831 23:17:29.219254 1378731 logs.go:138] Found kubelet problem: Aug 31 23:13:25 old-k8s-version-777320 kubelet[661]: E0831 23:13:25.100789     661 pod_workers.go:191] Error syncing pod 602e3f57-a665-4345-99f4-ac5f270847b4 ("dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"
	W0831 23:17:29.221881 1378731 logs.go:138] Found kubelet problem: Aug 31 23:13:31 old-k8s-version-777320 kubelet[661]: E0831 23:13:31.566325     661 pod_workers.go:191] Error syncing pod b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01 ("metrics-server-9975d5f86-dl7gj_kube-system(b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0831 23:17:29.222317 1378731 logs.go:138] Found kubelet problem: Aug 31 23:13:39 old-k8s-version-777320 kubelet[661]: E0831 23:13:39.557527     661 pod_workers.go:191] Error syncing pod 602e3f57-a665-4345-99f4-ac5f270847b4 ("dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"
	W0831 23:17:29.222567 1378731 logs.go:138] Found kubelet problem: Aug 31 23:13:46 old-k8s-version-777320 kubelet[661]: E0831 23:13:46.576899     661 pod_workers.go:191] Error syncing pod b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01 ("metrics-server-9975d5f86-dl7gj_kube-system(b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0831 23:17:29.222999 1378731 logs.go:138] Found kubelet problem: Aug 31 23:13:51 old-k8s-version-777320 kubelet[661]: E0831 23:13:51.557404     661 pod_workers.go:191] Error syncing pod 602e3f57-a665-4345-99f4-ac5f270847b4 ("dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"
	W0831 23:17:29.223228 1378731 logs.go:138] Found kubelet problem: Aug 31 23:13:59 old-k8s-version-777320 kubelet[661]: E0831 23:13:59.557798     661 pod_workers.go:191] Error syncing pod b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01 ("metrics-server-9975d5f86-dl7gj_kube-system(b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0831 23:17:29.223904 1378731 logs.go:138] Found kubelet problem: Aug 31 23:14:06 old-k8s-version-777320 kubelet[661]: E0831 23:14:06.253495     661 pod_workers.go:191] Error syncing pod 602e3f57-a665-4345-99f4-ac5f270847b4 ("dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"
	W0831 23:17:29.224132 1378731 logs.go:138] Found kubelet problem: Aug 31 23:14:12 old-k8s-version-777320 kubelet[661]: E0831 23:14:12.562241     661 pod_workers.go:191] Error syncing pod b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01 ("metrics-server-9975d5f86-dl7gj_kube-system(b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0831 23:17:29.224600 1378731 logs.go:138] Found kubelet problem: Aug 31 23:14:15 old-k8s-version-777320 kubelet[661]: E0831 23:14:15.101214     661 pod_workers.go:191] Error syncing pod 602e3f57-a665-4345-99f4-ac5f270847b4 ("dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"
	W0831 23:17:29.224848 1378731 logs.go:138] Found kubelet problem: Aug 31 23:14:23 old-k8s-version-777320 kubelet[661]: E0831 23:14:23.557778     661 pod_workers.go:191] Error syncing pod b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01 ("metrics-server-9975d5f86-dl7gj_kube-system(b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0831 23:17:29.225290 1378731 logs.go:138] Found kubelet problem: Aug 31 23:14:29 old-k8s-version-777320 kubelet[661]: E0831 23:14:29.557518     661 pod_workers.go:191] Error syncing pod 602e3f57-a665-4345-99f4-ac5f270847b4 ("dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"
	W0831 23:17:29.225554 1378731 logs.go:138] Found kubelet problem: Aug 31 23:14:36 old-k8s-version-777320 kubelet[661]: E0831 23:14:36.558367     661 pod_workers.go:191] Error syncing pod b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01 ("metrics-server-9975d5f86-dl7gj_kube-system(b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0831 23:17:29.226034 1378731 logs.go:138] Found kubelet problem: Aug 31 23:14:44 old-k8s-version-777320 kubelet[661]: E0831 23:14:44.558217     661 pod_workers.go:191] Error syncing pod 602e3f57-a665-4345-99f4-ac5f270847b4 ("dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"
	W0831 23:17:29.226519 1378731 logs.go:138] Found kubelet problem: Aug 31 23:14:47 old-k8s-version-777320 kubelet[661]: E0831 23:14:47.557763     661 pod_workers.go:191] Error syncing pod b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01 ("metrics-server-9975d5f86-dl7gj_kube-system(b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0831 23:17:29.227154 1378731 logs.go:138] Found kubelet problem: Aug 31 23:14:58 old-k8s-version-777320 kubelet[661]: E0831 23:14:58.571753     661 pod_workers.go:191] Error syncing pod 602e3f57-a665-4345-99f4-ac5f270847b4 ("dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"
	W0831 23:17:29.230275 1378731 logs.go:138] Found kubelet problem: Aug 31 23:15:02 old-k8s-version-777320 kubelet[661]: E0831 23:15:02.567140     661 pod_workers.go:191] Error syncing pod b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01 ("metrics-server-9975d5f86-dl7gj_kube-system(b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0831 23:17:29.230707 1378731 logs.go:138] Found kubelet problem: Aug 31 23:15:13 old-k8s-version-777320 kubelet[661]: E0831 23:15:13.557443     661 pod_workers.go:191] Error syncing pod 602e3f57-a665-4345-99f4-ac5f270847b4 ("dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"
	W0831 23:17:29.230980 1378731 logs.go:138] Found kubelet problem: Aug 31 23:15:15 old-k8s-version-777320 kubelet[661]: E0831 23:15:15.557843     661 pod_workers.go:191] Error syncing pod b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01 ("metrics-server-9975d5f86-dl7gj_kube-system(b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0831 23:17:29.231342 1378731 logs.go:138] Found kubelet problem: Aug 31 23:15:27 old-k8s-version-777320 kubelet[661]: E0831 23:15:27.558332     661 pod_workers.go:191] Error syncing pod b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01 ("metrics-server-9975d5f86-dl7gj_kube-system(b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0831 23:17:29.231835 1378731 logs.go:138] Found kubelet problem: Aug 31 23:15:28 old-k8s-version-777320 kubelet[661]: E0831 23:15:28.485753     661 pod_workers.go:191] Error syncing pod 602e3f57-a665-4345-99f4-ac5f270847b4 ("dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"
	W0831 23:17:29.232186 1378731 logs.go:138] Found kubelet problem: Aug 31 23:15:35 old-k8s-version-777320 kubelet[661]: E0831 23:15:35.101354     661 pod_workers.go:191] Error syncing pod 602e3f57-a665-4345-99f4-ac5f270847b4 ("dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"
	W0831 23:17:29.232406 1378731 logs.go:138] Found kubelet problem: Aug 31 23:15:41 old-k8s-version-777320 kubelet[661]: E0831 23:15:41.558221     661 pod_workers.go:191] Error syncing pod b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01 ("metrics-server-9975d5f86-dl7gj_kube-system(b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0831 23:17:29.232779 1378731 logs.go:138] Found kubelet problem: Aug 31 23:15:46 old-k8s-version-777320 kubelet[661]: E0831 23:15:46.558727     661 pod_workers.go:191] Error syncing pod 602e3f57-a665-4345-99f4-ac5f270847b4 ("dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"
	W0831 23:17:29.233028 1378731 logs.go:138] Found kubelet problem: Aug 31 23:15:52 old-k8s-version-777320 kubelet[661]: E0831 23:15:52.560966     661 pod_workers.go:191] Error syncing pod b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01 ("metrics-server-9975d5f86-dl7gj_kube-system(b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0831 23:17:29.233399 1378731 logs.go:138] Found kubelet problem: Aug 31 23:16:00 old-k8s-version-777320 kubelet[661]: E0831 23:16:00.559529     661 pod_workers.go:191] Error syncing pod 602e3f57-a665-4345-99f4-ac5f270847b4 ("dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"
	W0831 23:17:29.233620 1378731 logs.go:138] Found kubelet problem: Aug 31 23:16:04 old-k8s-version-777320 kubelet[661]: E0831 23:16:04.557871     661 pod_workers.go:191] Error syncing pod b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01 ("metrics-server-9975d5f86-dl7gj_kube-system(b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0831 23:17:29.233986 1378731 logs.go:138] Found kubelet problem: Aug 31 23:16:12 old-k8s-version-777320 kubelet[661]: E0831 23:16:12.558116     661 pod_workers.go:191] Error syncing pod 602e3f57-a665-4345-99f4-ac5f270847b4 ("dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"
	W0831 23:17:29.234213 1378731 logs.go:138] Found kubelet problem: Aug 31 23:16:17 old-k8s-version-777320 kubelet[661]: E0831 23:16:17.558685     661 pod_workers.go:191] Error syncing pod b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01 ("metrics-server-9975d5f86-dl7gj_kube-system(b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0831 23:17:29.234579 1378731 logs.go:138] Found kubelet problem: Aug 31 23:16:26 old-k8s-version-777320 kubelet[661]: E0831 23:16:26.559804     661 pod_workers.go:191] Error syncing pod 602e3f57-a665-4345-99f4-ac5f270847b4 ("dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"
	W0831 23:17:29.234813 1378731 logs.go:138] Found kubelet problem: Aug 31 23:16:30 old-k8s-version-777320 kubelet[661]: E0831 23:16:30.558289     661 pod_workers.go:191] Error syncing pod b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01 ("metrics-server-9975d5f86-dl7gj_kube-system(b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0831 23:17:29.235197 1378731 logs.go:138] Found kubelet problem: Aug 31 23:16:37 old-k8s-version-777320 kubelet[661]: E0831 23:16:37.557945     661 pod_workers.go:191] Error syncing pod 602e3f57-a665-4345-99f4-ac5f270847b4 ("dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"
	W0831 23:17:29.235435 1378731 logs.go:138] Found kubelet problem: Aug 31 23:16:45 old-k8s-version-777320 kubelet[661]: E0831 23:16:45.557768     661 pod_workers.go:191] Error syncing pod b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01 ("metrics-server-9975d5f86-dl7gj_kube-system(b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0831 23:17:29.235812 1378731 logs.go:138] Found kubelet problem: Aug 31 23:16:50 old-k8s-version-777320 kubelet[661]: E0831 23:16:50.558846     661 pod_workers.go:191] Error syncing pod 602e3f57-a665-4345-99f4-ac5f270847b4 ("dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"
	W0831 23:17:29.236045 1378731 logs.go:138] Found kubelet problem: Aug 31 23:16:58 old-k8s-version-777320 kubelet[661]: E0831 23:16:58.561819     661 pod_workers.go:191] Error syncing pod b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01 ("metrics-server-9975d5f86-dl7gj_kube-system(b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0831 23:17:29.236417 1378731 logs.go:138] Found kubelet problem: Aug 31 23:17:02 old-k8s-version-777320 kubelet[661]: E0831 23:17:02.558100     661 pod_workers.go:191] Error syncing pod 602e3f57-a665-4345-99f4-ac5f270847b4 ("dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"
	W0831 23:17:29.236650 1378731 logs.go:138] Found kubelet problem: Aug 31 23:17:09 old-k8s-version-777320 kubelet[661]: E0831 23:17:09.557874     661 pod_workers.go:191] Error syncing pod b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01 ("metrics-server-9975d5f86-dl7gj_kube-system(b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0831 23:17:29.237021 1378731 logs.go:138] Found kubelet problem: Aug 31 23:17:17 old-k8s-version-777320 kubelet[661]: E0831 23:17:17.557481     661 pod_workers.go:191] Error syncing pod 602e3f57-a665-4345-99f4-ac5f270847b4 ("dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"
	W0831 23:17:29.237248 1378731 logs.go:138] Found kubelet problem: Aug 31 23:17:24 old-k8s-version-777320 kubelet[661]: E0831 23:17:24.557898     661 pod_workers.go:191] Error syncing pod b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01 ("metrics-server-9975d5f86-dl7gj_kube-system(b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
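All of the kubelet problems flagged above reduce to two pods: metrics-server-9975d5f86-dl7gj repeatedly failing to pull fake.domain/registry.k8s.io/echoserver:1.4 (an intentionally unresolvable registry, hence the persistent ErrImagePull/ImagePullBackOff), and dashboard-metrics-scraper-8d5bb5db8-klp6x cycling through CrashLoopBackOff. The scan itself is just a filter over the journalctl read a few lines up; to reproduce it, and to confirm the image reference driving the pull failures, something like the following works (a sketch; the deployment name metrics-server is inferred from the pod name above):

    sudo journalctl -u kubelet -n 400 | grep 'Error syncing pod'
    kubectl -n kube-system get deploy metrics-server \
      -o jsonpath='{.spec.template.spec.containers[0].image}'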
	I0831 23:17:29.237274 1378731 logs.go:123] Gathering logs for dmesg ...
	I0831 23:17:29.237305 1378731 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 23:17:29.257013 1378731 logs.go:123] Gathering logs for describe nodes ...
	I0831 23:17:29.257041 1378731 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 23:17:29.448128 1378731 logs.go:123] Gathering logs for kube-apiserver [24d2daafe86a0dcb6af4171206676787738fec4b49e748c8e217d63f6af8bb12] ...
	I0831 23:17:29.448212 1378731 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 24d2daafe86a0dcb6af4171206676787738fec4b49e748c8e217d63f6af8bb12"
	I0831 23:17:29.539734 1378731 logs.go:123] Gathering logs for coredns [83c393cb9b979e9591d3e9004c20ad7a85c3cf5a2fb01002fa02cdd21598c0ee] ...
	I0831 23:17:29.539770 1378731 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 83c393cb9b979e9591d3e9004c20ad7a85c3cf5a2fb01002fa02cdd21598c0ee"
	I0831 23:17:29.591574 1378731 logs.go:123] Gathering logs for kube-scheduler [f3564f974c7c186a4cf3110fbdf83d7607a54cbd7d58484326748df57213e666] ...
	I0831 23:17:29.591602 1378731 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f3564f974c7c186a4cf3110fbdf83d7607a54cbd7d58484326748df57213e666"
	I0831 23:17:29.658215 1378731 logs.go:123] Gathering logs for kindnet [0ddaf9925b5b8a2498baf923aedda0b04394c82cc0c7c18555b7233ea40ba24c] ...
	I0831 23:17:29.658241 1378731 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0ddaf9925b5b8a2498baf923aedda0b04394c82cc0c7c18555b7233ea40ba24c"
	I0831 23:17:29.717457 1378731 logs.go:123] Gathering logs for storage-provisioner [f7e4d956562c970c31e64420c5803c72175a04dd6fcef52066a3ece0a6233f9f] ...
	I0831 23:17:29.717478 1378731 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f7e4d956562c970c31e64420c5803c72175a04dd6fcef52066a3ece0a6233f9f"
	I0831 23:17:29.781489 1378731 logs.go:123] Gathering logs for storage-provisioner [a7e7bee1e72395c1a5b201c2a16ae1e8c0725a75e13db42a8afd0bd7b61f1a6b] ...
	I0831 23:17:29.781520 1378731 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a7e7bee1e72395c1a5b201c2a16ae1e8c0725a75e13db42a8afd0bd7b61f1a6b"
	I0831 23:17:29.848601 1378731 out.go:358] Setting ErrFile to fd 2...
	I0831 23:17:29.848638 1378731 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0831 23:17:29.848687 1378731 out.go:270] X Problems detected in kubelet:
	W0831 23:17:29.848702 1378731 out.go:270]   Aug 31 23:16:58 old-k8s-version-777320 kubelet[661]: E0831 23:16:58.561819     661 pod_workers.go:191] Error syncing pod b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01 ("metrics-server-9975d5f86-dl7gj_kube-system(b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0831 23:17:29.848708 1378731 out.go:270]   Aug 31 23:17:02 old-k8s-version-777320 kubelet[661]: E0831 23:17:02.558100     661 pod_workers.go:191] Error syncing pod 602e3f57-a665-4345-99f4-ac5f270847b4 ("dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"
	W0831 23:17:29.848719 1378731 out.go:270]   Aug 31 23:17:09 old-k8s-version-777320 kubelet[661]: E0831 23:17:09.557874     661 pod_workers.go:191] Error syncing pod b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01 ("metrics-server-9975d5f86-dl7gj_kube-system(b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0831 23:17:29.848725 1378731 out.go:270]   Aug 31 23:17:17 old-k8s-version-777320 kubelet[661]: E0831 23:17:17.557481     661 pod_workers.go:191] Error syncing pod 602e3f57-a665-4345-99f4-ac5f270847b4 ("dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"
	W0831 23:17:29.848731 1378731 out.go:270]   Aug 31 23:17:24 old-k8s-version-777320 kubelet[661]: E0831 23:17:24.557898     661 pod_workers.go:191] Error syncing pod b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01 ("metrics-server-9975d5f86-dl7gj_kube-system(b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0831 23:17:29.848742 1378731 out.go:358] Setting ErrFile to fd 2...
	I0831 23:17:29.848748 1378731 out.go:392] TERM=,COLORTERM=, which probably does not support color
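	[annotation] The gathering pass above is minikube's failure diagnostics: logs.go shells into the node and tails each component's container log with crictl, then summarizes recurring kubelet problems (here, metrics-server in ImagePullBackOff and dashboard-metrics-scraper in CrashLoopBackOff). A minimal sketch of reproducing the same collection by hand; the container IDs are the abbreviated ones from this report:
	    # List all containers, then tail the same component logs minikube gathered.
	    minikube -p old-k8s-version-777320 ssh -- sudo crictl ps -a
	    for id in 24d2daafe86a0 83c393cb9b979 f3564f974c7c1; do
	      minikube -p old-k8s-version-777320 ssh -- sudo crictl logs --tail 400 "$id"
	    done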
	I0831 23:17:29.656572 1389847 out.go:235] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0831 23:17:29.656896 1389847 start.go:159] libmachine.API.Create for "embed-certs-642101" (driver="docker")
	I0831 23:17:29.656931 1389847 client.go:168] LocalClient.Create starting
	I0831 23:17:29.657003 1389847 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-1161402/.minikube/certs/ca.pem
	I0831 23:17:29.657036 1389847 main.go:141] libmachine: Decoding PEM data...
	I0831 23:17:29.657054 1389847 main.go:141] libmachine: Parsing certificate...
	I0831 23:17:29.657113 1389847 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-1161402/.minikube/certs/cert.pem
	I0831 23:17:29.657133 1389847 main.go:141] libmachine: Decoding PEM data...
	I0831 23:17:29.657143 1389847 main.go:141] libmachine: Parsing certificate...
	I0831 23:17:29.657584 1389847 cli_runner.go:164] Run: docker network inspect embed-certs-642101 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0831 23:17:29.676805 1389847 cli_runner.go:211] docker network inspect embed-certs-642101 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0831 23:17:29.676903 1389847 network_create.go:284] running [docker network inspect embed-certs-642101] to gather additional debugging logs...
	I0831 23:17:29.676921 1389847 cli_runner.go:164] Run: docker network inspect embed-certs-642101
	W0831 23:17:29.691275 1389847 cli_runner.go:211] docker network inspect embed-certs-642101 returned with exit code 1
	I0831 23:17:29.691317 1389847 network_create.go:287] error running [docker network inspect embed-certs-642101]: docker network inspect embed-certs-642101: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-642101 not found
	I0831 23:17:29.691332 1389847 network_create.go:289] output of [docker network inspect embed-certs-642101]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-642101 not found
	
	** /stderr **
	I0831 23:17:29.691475 1389847 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0831 23:17:29.715432 1389847 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-180241d62096 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:0a:e1:60:90} reservation:<nil>}
	I0831 23:17:29.715862 1389847 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-12d09e5d3a70 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:0e:d7:4f:4e} reservation:<nil>}
	I0831 23:17:29.716277 1389847 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-884799888d32 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:df:89:a0:61} reservation:<nil>}
	I0831 23:17:29.716836 1389847 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40018e8780}
	I0831 23:17:29.716857 1389847 network_create.go:124] attempt to create docker network embed-certs-642101 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0831 23:17:29.716921 1389847 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-642101 embed-certs-642101
	I0831 23:17:29.808542 1389847 network_create.go:108] docker network embed-certs-642101 192.168.76.0/24 created
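	[annotation] Subnet selection for the new profile is visible above: minikube inspects the existing bridge networks, skips 192.168.49.0/24, 192.168.58.0/24 and 192.168.67.0/24 because they are taken, and settles on 192.168.76.0/24. A stand-alone sketch of the same check plus the create command it ran (flags copied from the log):
	    # See which subnets are already claimed by docker bridge networks.
	    docker network ls -q | xargs docker network inspect \
	      --format '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}}{{end}}'
	    # Create the profile network as minikube did; 192.168.76.0/24 was the first free /24.
	    docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 \
	      -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
	      --label=created_by.minikube.sigs.k8s.io=true \
	      --label=name.minikube.sigs.k8s.io=embed-certs-642101 embed-certs-642101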
	I0831 23:17:29.808572 1389847 kic.go:121] calculated static IP "192.168.76.2" for the "embed-certs-642101" container
	I0831 23:17:29.808927 1389847 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0831 23:17:29.843585 1389847 cli_runner.go:164] Run: docker volume create embed-certs-642101 --label name.minikube.sigs.k8s.io=embed-certs-642101 --label created_by.minikube.sigs.k8s.io=true
	I0831 23:17:29.863085 1389847 oci.go:103] Successfully created a docker volume embed-certs-642101
	I0831 23:17:29.863172 1389847 cli_runner.go:164] Run: docker run --rm --name embed-certs-642101-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-642101 --entrypoint /usr/bin/test -v embed-certs-642101:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 -d /var/lib
	I0831 23:17:30.546493 1389847 oci.go:107] Successfully prepared a docker volume embed-certs-642101
	I0831 23:17:30.546576 1389847 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime containerd
	I0831 23:17:30.546601 1389847 kic.go:194] Starting extracting preloaded images to volume ...
	I0831 23:17:30.546686 1389847 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-1161402/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-642101:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0831 23:17:35.174991 1389847 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-1161402/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-642101:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 -I lz4 -xf /preloaded.tar -C /extractDir: (4.628265716s)
	I0831 23:17:35.175024 1389847 kic.go:203] duration metric: took 4.628418396s to extract preloaded images to volume ...
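	[annotation] The preloaded image tarball is extracted into the profile's docker volume before the node container exists, using a throwaway kicbase container whose entrypoint is tar. A condensed sketch of that step; $HOME/.minikube stands in for the CI's jenkins cache path, and the image digest pin is dropped for readability:
	    # Extract the lz4-compressed preload into the named volume.
	    docker run --rm --entrypoint /usr/bin/tar \
	      -v "$HOME/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro" \
	      -v embed-certs-642101:/extractDir \
	      gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530 \
	      -I lz4 -xf /preloaded.tar -C /extractDir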
	W0831 23:17:35.175174 1389847 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0831 23:17:35.175296 1389847 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0831 23:17:35.232048 1389847 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-642101 --name embed-certs-642101 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-642101 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-642101 --network embed-certs-642101 --ip 192.168.76.2 --volume embed-certs-642101:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0
	I0831 23:17:35.625717 1389847 cli_runner.go:164] Run: docker container inspect embed-certs-642101 --format={{.State.Running}}
	I0831 23:17:35.645201 1389847 cli_runner.go:164] Run: docker container inspect embed-certs-642101 --format={{.State.Status}}
	I0831 23:17:35.665213 1389847 cli_runner.go:164] Run: docker exec embed-certs-642101 stat /var/lib/dpkg/alternatives/iptables
	I0831 23:17:35.738161 1389847 oci.go:144] the created container "embed-certs-642101" has a running status.
	I0831 23:17:35.738192 1389847 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18943-1161402/.minikube/machines/embed-certs-642101/id_rsa...
	I0831 23:17:36.441543 1389847 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18943-1161402/.minikube/machines/embed-certs-642101/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0831 23:17:36.464802 1389847 cli_runner.go:164] Run: docker container inspect embed-certs-642101 --format={{.State.Status}}
	I0831 23:17:36.482393 1389847 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0831 23:17:36.482426 1389847 kic_runner.go:114] Args: [docker exec --privileged embed-certs-642101 chown docker:docker /home/docker/.ssh/authorized_keys]
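	[annotation] SSH access to the node is bootstrapped by generating a fresh RSA key on the host and installing the public half as the in-container docker user's authorized_keys. A hedged shell equivalent of what kic_runner does (the mkdir/cp steps are assumptions; only the chown is verbatim from the log):
	    # Generate the per-profile key (minikube stores it under .minikube/machines/<profile>/).
	    ssh-keygen -t rsa -N "" -f ./id_rsa
	    # Install the public key for the "docker" user inside the running container.
	    docker exec embed-certs-642101 mkdir -p /home/docker/.ssh
	    docker cp ./id_rsa.pub embed-certs-642101:/home/docker/.ssh/authorized_keys
	    docker exec --privileged embed-certs-642101 chown docker:docker /home/docker/.ssh/authorized_keys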
	I0831 23:17:36.540094 1389847 cli_runner.go:164] Run: docker container inspect embed-certs-642101 --format={{.State.Status}}
	I0831 23:17:36.580561 1389847 machine.go:93] provisionDockerMachine start ...
	I0831 23:17:36.580660 1389847 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-642101
	I0831 23:17:36.599422 1389847 main.go:141] libmachine: Using SSH client type: native
	I0831 23:17:36.600025 1389847 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 34579 <nil> <nil>}
	I0831 23:17:36.600060 1389847 main.go:141] libmachine: About to run SSH command:
	hostname
	I0831 23:17:36.745648 1389847 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-642101
	
	I0831 23:17:36.745675 1389847 ubuntu.go:169] provisioning hostname "embed-certs-642101"
	I0831 23:17:36.745740 1389847 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-642101
	I0831 23:17:36.765411 1389847 main.go:141] libmachine: Using SSH client type: native
	I0831 23:17:36.765662 1389847 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 34579 <nil> <nil>}
	I0831 23:17:36.765674 1389847 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-642101 && echo "embed-certs-642101" | sudo tee /etc/hostname
	I0831 23:17:36.934910 1389847 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-642101
	
	I0831 23:17:36.934992 1389847 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-642101
	I0831 23:17:36.958333 1389847 main.go:141] libmachine: Using SSH client type: native
	I0831 23:17:36.958619 1389847 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 34579 <nil> <nil>}
	I0831 23:17:36.958646 1389847 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-642101' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-642101/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-642101' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0831 23:17:37.136887 1389847 main.go:141] libmachine: SSH cmd err, output: <nil>: 
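	[annotation] All of the provisioning above runs over SSH to 127.0.0.1 rather than the container IP, because the kic container publishes its sshd on a random loopback port (34579 here). The port can be recovered with the same inspect template the log uses:
	    # Recover the host port mapped to the container's 22/tcp.
	    PORT=$(docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' embed-certs-642101)
	    # Provisioning commands are then plain ssh invocations:
	    ssh -i ./id_rsa -p "$PORT" docker@127.0.0.1 hostname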
	I0831 23:17:37.136916 1389847 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18943-1161402/.minikube CaCertPath:/home/jenkins/minikube-integration/18943-1161402/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18943-1161402/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18943-1161402/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18943-1161402/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18943-1161402/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18943-1161402/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18943-1161402/.minikube}
	I0831 23:17:37.136955 1389847 ubuntu.go:177] setting up certificates
	I0831 23:17:37.136964 1389847 provision.go:84] configureAuth start
	I0831 23:17:37.137034 1389847 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "embed-certs-642101")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-642101
	I0831 23:17:37.161563 1389847 provision.go:143] copyHostCerts
	I0831 23:17:37.161633 1389847 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-1161402/.minikube/ca.pem, removing ...
	I0831 23:17:37.161647 1389847 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-1161402/.minikube/ca.pem
	I0831 23:17:37.161715 1389847 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-1161402/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18943-1161402/.minikube/ca.pem (1078 bytes)
	I0831 23:17:37.161804 1389847 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-1161402/.minikube/cert.pem, removing ...
	I0831 23:17:37.161815 1389847 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-1161402/.minikube/cert.pem
	I0831 23:17:37.161841 1389847 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-1161402/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18943-1161402/.minikube/cert.pem (1123 bytes)
	I0831 23:17:37.161910 1389847 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-1161402/.minikube/key.pem, removing ...
	I0831 23:17:37.161918 1389847 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-1161402/.minikube/key.pem
	I0831 23:17:37.161944 1389847 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-1161402/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18943-1161402/.minikube/key.pem (1679 bytes)
	I0831 23:17:37.162004 1389847 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18943-1161402/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18943-1161402/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18943-1161402/.minikube/certs/ca-key.pem org=jenkins.embed-certs-642101 san=[127.0.0.1 192.168.76.2 embed-certs-642101 localhost minikube]
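	[annotation] configureAuth regenerates the machine server certificate with SANs covering the static IP, the hostname, localhost and 127.0.0.1, signed by the shared minikube CA. minikube does this natively in Go; an equivalent openssl sketch (filenames are placeholders) would be:
	    # Hedged openssl equivalent of the server-cert generation.
	    openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem -out server.csr \
	      -subj "/O=jenkins.embed-certs-642101"
	    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	      -days 365 -out server.pem \
	      -extfile <(printf "subjectAltName=IP:127.0.0.1,IP:192.168.76.2,DNS:embed-certs-642101,DNS:localhost,DNS:minikube")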
	I0831 23:17:37.863643 1389847 provision.go:177] copyRemoteCerts
	I0831 23:17:37.863727 1389847 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0831 23:17:37.863772 1389847 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-642101
	I0831 23:17:37.880725 1389847 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34579 SSHKeyPath:/home/jenkins/minikube-integration/18943-1161402/.minikube/machines/embed-certs-642101/id_rsa Username:docker}
	I0831 23:17:37.978357 1389847 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-1161402/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0831 23:17:38.005396 1389847 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-1161402/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0831 23:17:38.047384 1389847 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-1161402/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0831 23:17:38.078132 1389847 provision.go:87] duration metric: took 941.148253ms to configureAuth
	I0831 23:17:38.078161 1389847 ubuntu.go:193] setting minikube options for container-runtime
	I0831 23:17:38.078363 1389847 config.go:182] Loaded profile config "embed-certs-642101": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0831 23:17:38.078372 1389847 machine.go:96] duration metric: took 1.497790823s to provisionDockerMachine
	I0831 23:17:38.078379 1389847 client.go:171] duration metric: took 8.42144202s to LocalClient.Create
	I0831 23:17:38.078402 1389847 start.go:167] duration metric: took 8.421507686s to libmachine.API.Create "embed-certs-642101"
	I0831 23:17:38.078412 1389847 start.go:293] postStartSetup for "embed-certs-642101" (driver="docker")
	I0831 23:17:38.078422 1389847 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0831 23:17:38.078478 1389847 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0831 23:17:38.078521 1389847 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-642101
	I0831 23:17:38.097216 1389847 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34579 SSHKeyPath:/home/jenkins/minikube-integration/18943-1161402/.minikube/machines/embed-certs-642101/id_rsa Username:docker}
	I0831 23:17:38.194416 1389847 ssh_runner.go:195] Run: cat /etc/os-release
	I0831 23:17:38.197946 1389847 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0831 23:17:38.198001 1389847 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0831 23:17:38.198020 1389847 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0831 23:17:38.198028 1389847 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0831 23:17:38.198041 1389847 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-1161402/.minikube/addons for local assets ...
	I0831 23:17:38.198108 1389847 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-1161402/.minikube/files for local assets ...
	I0831 23:17:38.198195 1389847 filesync.go:149] local asset: /home/jenkins/minikube-integration/18943-1161402/.minikube/files/etc/ssl/certs/11667852.pem -> 11667852.pem in /etc/ssl/certs
	I0831 23:17:38.198303 1389847 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0831 23:17:38.207736 1389847 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-1161402/.minikube/files/etc/ssl/certs/11667852.pem --> /etc/ssl/certs/11667852.pem (1708 bytes)
	I0831 23:17:38.234865 1389847 start.go:296] duration metric: took 156.439001ms for postStartSetup
	I0831 23:17:38.235254 1389847 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "embed-certs-642101")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-642101
	I0831 23:17:38.252094 1389847 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/embed-certs-642101/config.json ...
	I0831 23:17:38.252682 1389847 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0831 23:17:38.252739 1389847 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-642101
	I0831 23:17:38.269397 1389847 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34579 SSHKeyPath:/home/jenkins/minikube-integration/18943-1161402/.minikube/machines/embed-certs-642101/id_rsa Username:docker}
	I0831 23:17:38.361813 1389847 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
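	[annotation] The two df probes just before the machine lock is released check how full /var is (as a percentage) and how many whole GB remain, presumably so minikube can warn when the node disk is nearly full. Stand-alone:
	    df -h /var | awk 'NR==2{print $5}'   # used capacity of /var, e.g. "21%"
	    df -BG /var | awk 'NR==2{print $4}'  # free space in whole GB, e.g. "154G"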
	I0831 23:17:38.366384 1389847 start.go:128] duration metric: took 8.71235102s to createHost
	I0831 23:17:38.366408 1389847 start.go:83] releasing machines lock for "embed-certs-642101", held for 8.712521703s
	I0831 23:17:38.366478 1389847 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "embed-certs-642101")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-642101
	I0831 23:17:38.384055 1389847 ssh_runner.go:195] Run: cat /version.json
	I0831 23:17:38.384117 1389847 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-642101
	I0831 23:17:38.384445 1389847 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0831 23:17:38.384501 1389847 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-642101
	I0831 23:17:38.404672 1389847 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34579 SSHKeyPath:/home/jenkins/minikube-integration/18943-1161402/.minikube/machines/embed-certs-642101/id_rsa Username:docker}
	I0831 23:17:38.414498 1389847 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34579 SSHKeyPath:/home/jenkins/minikube-integration/18943-1161402/.minikube/machines/embed-certs-642101/id_rsa Username:docker}
	I0831 23:17:38.496120 1389847 ssh_runner.go:195] Run: systemctl --version
	I0831 23:17:38.628296 1389847 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0831 23:17:38.632544 1389847 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0831 23:17:38.658407 1389847 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0831 23:17:38.658486 1389847 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0831 23:17:38.688723 1389847 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
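	[annotation] CNI cleanup happens in two passes: loopback configs are patched to carry a name and cniVersion 1.0.0, and any stock bridge/podman configs are renamed with a .mk_disabled suffix so only minikube's own CNI (kindnet in this run) stays active. The disabling pass, condensed from the find invocation above:
	    # Disable distro-shipped bridge/podman CNI configs.
	    sudo find /etc/cni/net.d -maxdepth 1 -type f \
	      \( \( -name '*bridge*' -o -name '*podman*' \) -a -not -name '*.mk_disabled' \) \
	      -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;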
	I0831 23:17:38.688749 1389847 start.go:495] detecting cgroup driver to use...
	I0831 23:17:38.688782 1389847 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0831 23:17:38.688857 1389847 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0831 23:17:38.701220 1389847 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0831 23:17:38.712501 1389847 docker.go:217] disabling cri-docker service (if available) ...
	I0831 23:17:38.712565 1389847 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0831 23:17:38.726520 1389847 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0831 23:17:38.741171 1389847 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0831 23:17:38.836585 1389847 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0831 23:17:38.934914 1389847 docker.go:233] disabling docker service ...
	I0831 23:17:38.935014 1389847 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0831 23:17:38.957796 1389847 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0831 23:17:38.970666 1389847 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0831 23:17:39.095636 1389847 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0831 23:17:39.201131 1389847 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0831 23:17:39.213822 1389847 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0831 23:17:39.230984 1389847 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0831 23:17:39.242054 1389847 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0831 23:17:39.252649 1389847 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0831 23:17:39.252719 1389847 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0831 23:17:39.263746 1389847 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0831 23:17:39.274821 1389847 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0831 23:17:39.285173 1389847 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0831 23:17:39.295451 1389847 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0831 23:17:39.305998 1389847 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0831 23:17:39.315602 1389847 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0831 23:17:39.325212 1389847 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
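	[annotation] The run of sed edits above rewrites /etc/containerd/config.toml in place: sandbox image pinned to registry.k8s.io/pause:3.10, restrict_oom_score_adj off, cgroupfs instead of systemd cgroups (SystemdCgroup = false), the runc v2 shim, CNI conf_dir /etc/cni/net.d, and unprivileged ports enabled. A hedged sketch of the resulting fragment; the exact layout varies by containerd version:
	    # Approximate post-edit fragment of config.toml (containerd 1.7.x layout assumed).
	    cat <<'EOF'
	    [plugins."io.containerd.grpc.v1.cri"]
	      enable_unprivileged_ports = true
	      sandbox_image = "registry.k8s.io/pause:3.10"
	      restrict_oom_score_adj = false
	      [plugins."io.containerd.grpc.v1.cri".cni]
	        conf_dir = "/etc/cni/net.d"
	      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
	        runtime_type = "io.containerd.runc.v2"
	        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	          SystemdCgroup = false
	    EOF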
	I0831 23:17:39.335576 1389847 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0831 23:17:39.344101 1389847 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0831 23:17:39.352824 1389847 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0831 23:17:39.443291 1389847 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0831 23:17:39.587650 1389847 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0831 23:17:39.587792 1389847 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0831 23:17:39.591455 1389847 start.go:563] Will wait 60s for crictl version
	I0831 23:17:39.591567 1389847 ssh_runner.go:195] Run: which crictl
	I0831 23:17:39.594905 1389847 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0831 23:17:39.632378 1389847 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.21
	RuntimeApiVersion:  v1
	I0831 23:17:39.632500 1389847 ssh_runner.go:195] Run: containerd --version
	I0831 23:17:39.658853 1389847 ssh_runner.go:195] Run: containerd --version
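	[annotation] After rewriting the config, containerd is restarted and minikube waits up to 60s each for the socket to appear and for crictl to answer. Verifying the runtime by hand, on the node, looks like:
	    sudo systemctl restart containerd
	    sudo systemctl is-active containerd      # should print "active"
	    stat /run/containerd/containerd.sock     # socket must exist before kubeadm runs
	    sudo crictl version                      # RuntimeName: containerd, RuntimeVersion: 1.7.21
	    containerd --version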
	I0831 23:17:39.686747 1389847 out.go:177] * Preparing Kubernetes v1.31.0 on containerd 1.7.21 ...
	I0831 23:17:39.849201 1378731 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0831 23:17:39.865165 1378731 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I0831 23:17:39.867576 1378731 out.go:201] 
	W0831 23:17:39.869895 1378731 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W0831 23:17:39.870097 1378731 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	W0831 23:17:39.870173 1378731 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	W0831 23:17:39.870268 1378731 out.go:270] * 
	W0831 23:17:39.871923 1378731 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0831 23:17:39.874230 1378731 out.go:201] 
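	[annotation] This is where the SecondStart test actually fails: the apiserver answers /healthz with 200, but the control plane never reports the expected v1.20.0 within the 6m wait, so minikube exits with K8S_UNHEALTHY_CONTROL_PLANE. The remediation it prints can be run directly:
	    # Capture full logs for the issue report, then wipe the profile as suggested.
	    minikube logs --file=logs.txt -p old-k8s-version-777320
	    minikube delete --all --purge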
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	c2de4a0a64d9d       523cad1a4df73       2 minutes ago       Exited              dashboard-metrics-scraper   5                   7d62dde8e02e6       dashboard-metrics-scraper-8d5bb5db8-klp6x
	f7e4d956562c9       ba04bb24b9575       4 minutes ago       Running             storage-provisioner         2                   ef529c5ec9d62       storage-provisioner
	3a128b2ebef52       20b332c9a70d8       5 minutes ago       Running             kubernetes-dashboard        0                   42c2ce80f7590       kubernetes-dashboard-cd95d586-m6vqc
	83c393cb9b979       db91994f4ee8f       5 minutes ago       Running             coredns                     1                   e2c8a901df77a       coredns-74ff55c5b-9zjlw
	7983f300cc438       1611cd07b61d5       5 minutes ago       Running             busybox                     1                   eb34053df2a01       busybox
	a7e7bee1e7239       ba04bb24b9575       5 minutes ago       Exited              storage-provisioner         1                   ef529c5ec9d62       storage-provisioner
	a630dbfa8aa90       25a5233254979       5 minutes ago       Running             kube-proxy                  1                   75dbdd8273c83       kube-proxy-wv4m2
	e7bba7657fd95       6a23fa8fd2b78       5 minutes ago       Running             kindnet-cni                 1                   5982b1f353096       kindnet-ww8r9
	f3564f974c7c1       e7605f88f17d6       5 minutes ago       Running             kube-scheduler              1                   a51d6cde483fe       kube-scheduler-old-k8s-version-777320
	2ba9272459fbd       1df8a2b116bd1       5 minutes ago       Running             kube-controller-manager     1                   527cef79ab943       kube-controller-manager-old-k8s-version-777320
	2b314ddb4c163       2c08bbbc02d3a       5 minutes ago       Running             kube-apiserver              1                   8ec6592e666a5       kube-apiserver-old-k8s-version-777320
	adc270d3e8398       05b738aa1bc63       5 minutes ago       Running             etcd                        1                   bb139b4e7bce4       etcd-old-k8s-version-777320
	d193ce9f04406       1611cd07b61d5       6 minutes ago       Exited              busybox                     0                   5d450d668c301       busybox
	26aa1d510c36c       db91994f4ee8f       7 minutes ago       Exited              coredns                     0                   cb5b195b1ace9       coredns-74ff55c5b-9zjlw
	0ddaf9925b5b8       6a23fa8fd2b78       8 minutes ago       Exited              kindnet-cni                 0                   e0407d6abf40e       kindnet-ww8r9
	375dfe16bc1a3       25a5233254979       8 minutes ago       Exited              kube-proxy                  0                   1148dc69eaa72       kube-proxy-wv4m2
	36189b23eed4d       05b738aa1bc63       8 minutes ago       Exited              etcd                        0                   677bb9ccead83       etcd-old-k8s-version-777320
	cb40349771736       e7605f88f17d6       8 minutes ago       Exited              kube-scheduler              0                   d095b9e73e8ea       kube-scheduler-old-k8s-version-777320
	4a53de8c7cfef       1df8a2b116bd1       8 minutes ago       Exited              kube-controller-manager     0                   ff92a6a21a4a5       kube-controller-manager-old-k8s-version-777320
	24d2daafe86a0       2c08bbbc02d3a       8 minutes ago       Exited              kube-apiserver              0                   789f00d8ece98       kube-apiserver-old-k8s-version-777320
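	[annotation] In the table above, dashboard-metrics-scraper at ATTEMPT 5 in state Exited matches the kubelet's CrashLoopBackOff ("back-off 2m40s"), while metrics-server never appears at all because its image pull keeps failing. The same view, from the node:
	    # List exited containers and inspect the crash-looping scraper (ID from the table).
	    sudo crictl ps -a --state exited
	    sudo crictl logs c2de4a0a64d9d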
	
	
	==> containerd <==
	Aug 31 23:13:31 old-k8s-version-777320 containerd[570]: time="2024-08-31T23:13:31.563236969Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host" host=fake.domain
	Aug 31 23:13:31 old-k8s-version-777320 containerd[570]: time="2024-08-31T23:13:31.564975506Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	Aug 31 23:13:31 old-k8s-version-777320 containerd[570]: time="2024-08-31T23:13:31.565207095Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Aug 31 23:14:05 old-k8s-version-777320 containerd[570]: time="2024-08-31T23:14:05.560995768Z" level=info msg="CreateContainer within sandbox \"7d62dde8e02e626042d9c3c4ad31065f17803d4060ee187321991ffb2e6780e4\" for container name:\"dashboard-metrics-scraper\"  attempt:4"
	Aug 31 23:14:05 old-k8s-version-777320 containerd[570]: time="2024-08-31T23:14:05.583053548Z" level=info msg="CreateContainer within sandbox \"7d62dde8e02e626042d9c3c4ad31065f17803d4060ee187321991ffb2e6780e4\" for name:\"dashboard-metrics-scraper\"  attempt:4 returns container id \"c861355db08d33725db8674fb1b5c327d86ab02d70cdbe92bfe91b8f02218ecc\""
	Aug 31 23:14:05 old-k8s-version-777320 containerd[570]: time="2024-08-31T23:14:05.584034246Z" level=info msg="StartContainer for \"c861355db08d33725db8674fb1b5c327d86ab02d70cdbe92bfe91b8f02218ecc\""
	Aug 31 23:14:05 old-k8s-version-777320 containerd[570]: time="2024-08-31T23:14:05.654258843Z" level=info msg="StartContainer for \"c861355db08d33725db8674fb1b5c327d86ab02d70cdbe92bfe91b8f02218ecc\" returns successfully"
	Aug 31 23:14:05 old-k8s-version-777320 containerd[570]: time="2024-08-31T23:14:05.681042561Z" level=info msg="shim disconnected" id=c861355db08d33725db8674fb1b5c327d86ab02d70cdbe92bfe91b8f02218ecc namespace=k8s.io
	Aug 31 23:14:05 old-k8s-version-777320 containerd[570]: time="2024-08-31T23:14:05.681108046Z" level=warning msg="cleaning up after shim disconnected" id=c861355db08d33725db8674fb1b5c327d86ab02d70cdbe92bfe91b8f02218ecc namespace=k8s.io
	Aug 31 23:14:05 old-k8s-version-777320 containerd[570]: time="2024-08-31T23:14:05.681118737Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Aug 31 23:14:06 old-k8s-version-777320 containerd[570]: time="2024-08-31T23:14:06.255220328Z" level=info msg="RemoveContainer for \"99aaa3d691aed3c094a3ee12a314c6eb163fd63cb1c632764d107c7a9fc1fef3\""
	Aug 31 23:14:06 old-k8s-version-777320 containerd[570]: time="2024-08-31T23:14:06.261705506Z" level=info msg="RemoveContainer for \"99aaa3d691aed3c094a3ee12a314c6eb163fd63cb1c632764d107c7a9fc1fef3\" returns successfully"
	Aug 31 23:15:02 old-k8s-version-777320 containerd[570]: time="2024-08-31T23:15:02.558633283Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 31 23:15:02 old-k8s-version-777320 containerd[570]: time="2024-08-31T23:15:02.564722294Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host" host=fake.domain
	Aug 31 23:15:02 old-k8s-version-777320 containerd[570]: time="2024-08-31T23:15:02.566704867Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	Aug 31 23:15:02 old-k8s-version-777320 containerd[570]: time="2024-08-31T23:15:02.566782717Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Aug 31 23:15:27 old-k8s-version-777320 containerd[570]: time="2024-08-31T23:15:27.559759417Z" level=info msg="CreateContainer within sandbox \"7d62dde8e02e626042d9c3c4ad31065f17803d4060ee187321991ffb2e6780e4\" for container name:\"dashboard-metrics-scraper\"  attempt:5"
	Aug 31 23:15:27 old-k8s-version-777320 containerd[570]: time="2024-08-31T23:15:27.579818936Z" level=info msg="CreateContainer within sandbox \"7d62dde8e02e626042d9c3c4ad31065f17803d4060ee187321991ffb2e6780e4\" for name:\"dashboard-metrics-scraper\"  attempt:5 returns container id \"c2de4a0a64d9d9a0769c26950e0e4a346b6b730b12124485a252c62fde7e17ca\""
	Aug 31 23:15:27 old-k8s-version-777320 containerd[570]: time="2024-08-31T23:15:27.580715269Z" level=info msg="StartContainer for \"c2de4a0a64d9d9a0769c26950e0e4a346b6b730b12124485a252c62fde7e17ca\""
	Aug 31 23:15:27 old-k8s-version-777320 containerd[570]: time="2024-08-31T23:15:27.652958321Z" level=info msg="StartContainer for \"c2de4a0a64d9d9a0769c26950e0e4a346b6b730b12124485a252c62fde7e17ca\" returns successfully"
	Aug 31 23:15:27 old-k8s-version-777320 containerd[570]: time="2024-08-31T23:15:27.676008155Z" level=info msg="shim disconnected" id=c2de4a0a64d9d9a0769c26950e0e4a346b6b730b12124485a252c62fde7e17ca namespace=k8s.io
	Aug 31 23:15:27 old-k8s-version-777320 containerd[570]: time="2024-08-31T23:15:27.676065312Z" level=warning msg="cleaning up after shim disconnected" id=c2de4a0a64d9d9a0769c26950e0e4a346b6b730b12124485a252c62fde7e17ca namespace=k8s.io
	Aug 31 23:15:27 old-k8s-version-777320 containerd[570]: time="2024-08-31T23:15:27.676076832Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Aug 31 23:15:28 old-k8s-version-777320 containerd[570]: time="2024-08-31T23:15:28.492310823Z" level=info msg="RemoveContainer for \"c861355db08d33725db8674fb1b5c327d86ab02d70cdbe92bfe91b8f02218ecc\""
	Aug 31 23:15:28 old-k8s-version-777320 containerd[570]: time="2024-08-31T23:15:28.497970707Z" level=info msg="RemoveContainer for \"c861355db08d33725db8674fb1b5c327d86ab02d70cdbe92bfe91b8f02218ecc\" returns successfully"
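	[annotation] The containerd log confirms why metrics-server is stuck: its image is pinned to fake.domain/registry.k8s.io/echoserver:1.4, a registry host that does not resolve (and which appears intentional in this test setup, since it recurs across these reports), so every pull dies at the DNS lookup. Reproducing the failure from the node:
	    # Both fail the same way: fake.domain does not resolve.
	    nslookup fake.domain
	    sudo crictl pull fake.domain/registry.k8s.io/echoserver:1.4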
	
	
	==> coredns [26aa1d510c36cb526536c021669ae3f5436cdad4f0a6f9c30b4b361249af6077] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = 093a0bf1423dd8c4eee62372bb216168
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:45805 - 31679 "HINFO IN 2404844539766162594.7152286968403205230. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.027017036s
	
	
	==> coredns [83c393cb9b979e9591d3e9004c20ad7a85c3cf5a2fb01002fa02cdd21598c0ee] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = 093a0bf1423dd8c4eee62372bb216168
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:59626 - 24808 "HINFO IN 815135547465917707.4799201704170482130. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.014735862s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	I0831 23:12:30.552542       1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-08-31 23:12:00.551394242 +0000 UTC m=+0.038268959) (total time: 30.001051784s):
	Trace[2019727887]: [30.001051784s] [30.001051784s] END
	E0831 23:12:30.552581       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0831 23:12:30.553676       1 trace.go:116] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-08-31 23:12:00.55317396 +0000 UTC m=+0.040048759) (total time: 30.000485116s):
	Trace[939984059]: [30.000485116s] [30.000485116s] END
	E0831 23:12:30.553690       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0831 23:12:30.554322       1 trace.go:116] Trace[1474941318]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-08-31 23:12:00.554011619 +0000 UTC m=+0.040886327) (total time: 30.000297384s):
	Trace[1474941318]: [30.000297384s] [30.000297384s] END
	E0831 23:12:30.554334       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
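	[annotation] The restarted coredns spends its first 30s unable to list Services, Endpoints and Namespaces because the kubernetes service VIP (10.96.0.1:443) is unreachable while kube-proxy is still coming back; the "plugin/ready: Still waiting on: kubernetes" lines above are the same wait. A quick probe of that VIP from the node (assumes kube-proxy's rules are in place; /healthz is served to unauthenticated clients by default):
	    minikube -p old-k8s-version-777320 ssh -- curl -sk https://10.96.0.1:443/healthz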
	
	
	==> describe nodes <==
	Name:               old-k8s-version-777320
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-777320
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8ab9a20c866aaad18bea6fac47c5d146303457d2
	                    minikube.k8s.io/name=old-k8s-version-777320
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_31T23_09_15_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 31 Aug 2024 23:09:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-777320
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 31 Aug 2024 23:17:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 31 Aug 2024 23:12:50 +0000   Sat, 31 Aug 2024 23:09:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 31 Aug 2024 23:12:50 +0000   Sat, 31 Aug 2024 23:09:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 31 Aug 2024 23:12:50 +0000   Sat, 31 Aug 2024 23:09:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 31 Aug 2024 23:12:50 +0000   Sat, 31 Aug 2024 23:09:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-777320
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 98b5aab821e246fd90c4c728ddf20943
	  System UUID:                0346832c-168e-4caa-8b7a-9e1aa935440e
	  Boot ID:                    844307fd-f17e-4b74-a327-71aead28c204
	  Kernel Version:             5.15.0-1068-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.21
	  Kubelet Version:            v1.20.0
	  Kube-Proxy Version:         v1.20.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m36s
	  kube-system                 coredns-74ff55c5b-9zjlw                           100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     8m12s
	  kube-system                 etcd-old-k8s-version-777320                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         8m19s
	  kube-system                 kindnet-ww8r9                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      8m12s
	  kube-system                 kube-apiserver-old-k8s-version-777320             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m19s
	  kube-system                 kube-controller-manager-old-k8s-version-777320    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m19s
	  kube-system                 kube-proxy-wv4m2                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m12s
	  kube-system                 kube-scheduler-old-k8s-version-777320             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m19s
	  kube-system                 metrics-server-9975d5f86-dl7gj                    100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         6m25s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m11s
	  kubernetes-dashboard        dashboard-metrics-scraper-8d5bb5db8-klp6x         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m24s
	  kubernetes-dashboard        kubernetes-dashboard-cd95d586-m6vqc               0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             420Mi (5%)  220Mi (2%)
	  ephemeral-storage  100Mi (0%)  0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  8m39s (x5 over 8m39s)  kubelet     Node old-k8s-version-777320 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m39s (x4 over 8m39s)  kubelet     Node old-k8s-version-777320 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m39s (x4 over 8m39s)  kubelet     Node old-k8s-version-777320 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m20s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m20s                  kubelet     Node old-k8s-version-777320 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m20s                  kubelet     Node old-k8s-version-777320 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m20s                  kubelet     Node old-k8s-version-777320 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m19s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                8m12s                  kubelet     Node old-k8s-version-777320 status is now: NodeReady
	  Normal  Starting                 8m11s                  kube-proxy  Starting kube-proxy.
	  Normal  Starting                 5m56s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m56s (x8 over 5m56s)  kubelet     Node old-k8s-version-777320 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m56s (x8 over 5m56s)  kubelet     Node old-k8s-version-777320 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m56s (x7 over 5m56s)  kubelet     Node old-k8s-version-777320 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m56s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 5m42s                  kube-proxy  Starting kube-proxy.
	
	
	==> dmesg <==
	
	
	==> etcd [36189b23eed4dd70cc8796dc64533ece0f97c8515188d9c4bb6817079cf848fa] <==
	raft2024/08/31 23:09:04 INFO: 9f0758e1c58a86ed is starting a new election at term 1
	raft2024/08/31 23:09:04 INFO: 9f0758e1c58a86ed became candidate at term 2
	raft2024/08/31 23:09:04 INFO: 9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2
	raft2024/08/31 23:09:04 INFO: 9f0758e1c58a86ed became leader at term 2
	raft2024/08/31 23:09:04 INFO: raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2
	2024-08-31 23:09:04.995769 I | etcdserver: setting up the initial cluster version to 3.4
	2024-08-31 23:09:04.996866 N | etcdserver/membership: set the initial cluster version to 3.4
	2024-08-31 23:09:04.997200 I | etcdserver/api: enabled capabilities for version 3.4
	2024-08-31 23:09:04.997407 I | etcdserver: published {Name:old-k8s-version-777320 ClientURLs:[https://192.168.85.2:2379]} to cluster 68eaea490fab4e05
	2024-08-31 23:09:04.999528 I | embed: ready to serve client requests
	2024-08-31 23:09:05.005080 I | embed: serving client requests on 192.168.85.2:2379
	2024-08-31 23:09:05.005388 I | embed: ready to serve client requests
	2024-08-31 23:09:05.006842 I | embed: serving client requests on 127.0.0.1:2379
	2024-08-31 23:09:28.136003 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-31 23:09:32.138696 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-31 23:09:42.139131 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-31 23:09:52.138859 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-31 23:10:02.138957 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-31 23:10:12.138769 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-31 23:10:22.138792 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-31 23:10:32.138942 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-31 23:10:42.141051 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-31 23:10:52.138603 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-31 23:11:02.138836 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-31 23:11:12.138874 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> etcd [adc270d3e8398b7a86ff787dfd6fa155a7deeb47c29d94cf0371e7f3af2cf66a] <==
	2024-08-31 23:13:39.693725 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-31 23:13:49.693055 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-31 23:13:59.693128 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-31 23:14:09.693251 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-31 23:14:19.693212 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-31 23:14:29.693185 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-31 23:14:39.693094 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-31 23:14:49.693143 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-31 23:14:59.693826 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-31 23:15:09.693270 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-31 23:15:19.693080 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-31 23:15:29.693115 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-31 23:15:39.693573 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-31 23:15:49.693151 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-31 23:15:59.692941 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-31 23:16:09.693323 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-31 23:16:19.694153 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-31 23:16:29.694322 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-31 23:16:39.693739 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-31 23:16:49.693219 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-31 23:16:59.694942 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-31 23:17:09.693739 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-31 23:17:19.693065 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-31 23:17:29.693217 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-31 23:17:39.693834 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> kernel <==
	 23:17:42 up  7:00,  0 users,  load average: 2.00, 2.08, 2.51
	Linux old-k8s-version-777320 5.15.0-1068-aws #74~20.04.1-Ubuntu SMP Tue Aug 6 19:45:17 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
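
For reference, this snapshot is just standard host-command output. A minimal way to re-collect it (assuming the docker-driver node container is named after the profile, as the uname line suggests):

  $ docker exec old-k8s-version-777320 uptime
  $ docker exec old-k8s-version-777320 uname -a
  $ docker exec old-k8s-version-777320 cat /etc/os-release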
	
	
	==> kindnet [0ddaf9925b5b8a2498baf923aedda0b04394c82cc0c7c18555b7233ea40ba24c] <==
	I0831 23:09:33.908912       1 controller.go:338] Waiting for informer caches to sync
	I0831 23:09:33.908919       1 shared_informer.go:313] Waiting for caches to sync for kube-network-policies
	I0831 23:09:34.008983       1 shared_informer.go:320] Caches are synced for kube-network-policies
	I0831 23:09:34.009016       1 metrics.go:61] Registering metrics
	I0831 23:09:34.009067       1 controller.go:374] Syncing nftables rules
	I0831 23:09:43.826036       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0831 23:09:43.826131       1 main.go:299] handling current node
	I0831 23:09:53.826106       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0831 23:09:53.826147       1 main.go:299] handling current node
	I0831 23:10:03.834728       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0831 23:10:03.834765       1 main.go:299] handling current node
	I0831 23:10:13.833777       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0831 23:10:13.833813       1 main.go:299] handling current node
	I0831 23:10:23.825734       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0831 23:10:23.825764       1 main.go:299] handling current node
	I0831 23:10:33.825869       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0831 23:10:33.825905       1 main.go:299] handling current node
	I0831 23:10:43.828705       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0831 23:10:43.828743       1 main.go:299] handling current node
	I0831 23:10:53.826698       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0831 23:10:53.826739       1 main.go:299] handling current node
	I0831 23:11:03.834529       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0831 23:11:03.834566       1 main.go:299] handling current node
	I0831 23:11:13.825846       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0831 23:11:13.825884       1 main.go:299] handling current node
	
	
	==> kindnet [e7bba7657fd955c1b7ffaed6f8954f4add6c68cbbebb5450b878b68fecc3dfd4] <==
	I0831 23:15:41.012151       1 main.go:299] handling current node
	I0831 23:15:51.016971       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0831 23:15:51.017011       1 main.go:299] handling current node
	I0831 23:16:01.009417       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0831 23:16:01.009464       1 main.go:299] handling current node
	I0831 23:16:11.009757       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0831 23:16:11.009803       1 main.go:299] handling current node
	I0831 23:16:21.012984       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0831 23:16:21.013031       1 main.go:299] handling current node
	I0831 23:16:31.018021       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0831 23:16:31.018060       1 main.go:299] handling current node
	I0831 23:16:41.012804       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0831 23:16:41.012845       1 main.go:299] handling current node
	I0831 23:16:51.018757       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0831 23:16:51.018802       1 main.go:299] handling current node
	I0831 23:17:01.011102       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0831 23:17:01.014009       1 main.go:299] handling current node
	I0831 23:17:11.016724       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0831 23:17:11.016768       1 main.go:299] handling current node
	I0831 23:17:21.018220       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0831 23:17:21.018258       1 main.go:299] handling current node
	I0831 23:17:31.018565       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0831 23:17:31.018613       1 main.go:299] handling current node
	I0831 23:17:41.017080       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0831 23:17:41.017118       1 main.go:299] handling current node
	
	
	==> kube-apiserver [24d2daafe86a0dcb6af4171206676787738fec4b49e748c8e217d63f6af8bb12] <==
	I0831 23:09:12.248716       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0831 23:09:12.248745       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0831 23:09:12.273452       1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
	I0831 23:09:12.278465       1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
	I0831 23:09:12.278490       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0831 23:09:12.845313       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0831 23:09:12.890725       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0831 23:09:12.957063       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I0831 23:09:12.958252       1 controller.go:606] quota admission added evaluator for: endpoints
	I0831 23:09:12.962993       1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0831 23:09:13.940135       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	I0831 23:09:14.410624       1 controller.go:606] quota admission added evaluator for: deployments.apps
	I0831 23:09:14.457825       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	I0831 23:09:22.954871       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	I0831 23:09:30.441890       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
	I0831 23:09:30.523377       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	I0831 23:09:45.943686       1 client.go:360] parsed scheme: "passthrough"
	I0831 23:09:45.943732       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0831 23:09:45.943902       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0831 23:10:21.408807       1 client.go:360] parsed scheme: "passthrough"
	I0831 23:10:21.408855       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0831 23:10:21.408865       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0831 23:11:05.873851       1 client.go:360] parsed scheme: "passthrough"
	I0831 23:11:05.873951       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0831 23:11:05.873980       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	
	==> kube-apiserver [2b314ddb4c1637e9a96260a3016921d4c647744eac2a23a86ba97ac80539955e] <==
	I0831 23:14:16.629373       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0831 23:14:16.629381       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0831 23:14:51.248315       1 client.go:360] parsed scheme: "passthrough"
	I0831 23:14:51.248549       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0831 23:14:51.248701       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0831 23:15:01.403862       1 handler_proxy.go:102] no RequestInfo found in the context
	E0831 23:15:01.404119       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0831 23:15:01.404142       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0831 23:15:35.816519       1 client.go:360] parsed scheme: "passthrough"
	I0831 23:15:35.816563       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0831 23:15:35.816572       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0831 23:16:09.495071       1 client.go:360] parsed scheme: "passthrough"
	I0831 23:16:09.495127       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0831 23:16:09.495139       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0831 23:16:49.202937       1 client.go:360] parsed scheme: "passthrough"
	I0831 23:16:49.202981       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0831 23:16:49.202989       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0831 23:16:59.378519       1 handler_proxy.go:102] no RequestInfo found in the context
	E0831 23:16:59.378616       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0831 23:16:59.378625       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0831 23:17:29.082995       1 client.go:360] parsed scheme: "passthrough"
	I0831 23:17:29.083045       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0831 23:17:29.083057       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
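
The repeated 503s above indicate the aggregated v1beta1.metrics.k8s.io API has no healthy backend: metrics-server never came up (see the kubelet section below). A quick confirmation sketch, assuming the kubectl context is named after the profile and the upstream k8s-app=metrics-server label:

  $ kubectl --context old-k8s-version-777320 get apiservice v1beta1.metrics.k8s.io
  $ kubectl --context old-k8s-version-777320 -n kube-system get pods -l k8s-app=metrics-server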
	
	
	==> kube-controller-manager [2ba9272459fbd3d5920c42ffd67f3fc7be523ddb8abd1b3e2f8db38f6db5a2bd] <==
	E0831 23:13:19.289976       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0831 23:13:22.894977       1 request.go:655] Throttling request took 1.048375539s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
	W0831 23:13:23.746609       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0831 23:13:49.791790       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0831 23:13:55.397266       1 request.go:655] Throttling request took 1.048420785s, request: GET:https://192.168.85.2:8443/apis/autoscaling/v1?timeout=32s
	W0831 23:13:56.248790       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0831 23:14:20.293642       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0831 23:14:27.899228       1 request.go:655] Throttling request took 1.047225051s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
	W0831 23:14:28.750641       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0831 23:14:50.795356       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0831 23:15:00.404285       1 request.go:655] Throttling request took 1.051332704s, request: GET:https://192.168.85.2:8443/apis/events.k8s.io/v1beta1?timeout=32s
	W0831 23:15:01.253009       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0831 23:15:21.298535       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0831 23:15:32.903339       1 request.go:655] Throttling request took 1.04848932s, request: GET:https://192.168.85.2:8443/apis/coordination.k8s.io/v1beta1?timeout=32s
	W0831 23:15:33.758758       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0831 23:15:51.800264       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0831 23:16:05.409313       1 request.go:655] Throttling request took 1.048302746s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
	W0831 23:16:06.268959       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0831 23:16:22.302025       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0831 23:16:37.919727       1 request.go:655] Throttling request took 1.048629981s, request: GET:https://192.168.85.2:8443/apis/authorization.k8s.io/v1?timeout=32s
	W0831 23:16:38.771076       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0831 23:16:52.803764       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0831 23:17:10.421460       1 request.go:655] Throttling request took 1.048264197s, request: GET:https://192.168.85.2:8443/apis/events.k8s.io/v1beta1?timeout=32s
	W0831 23:17:11.273026       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0831 23:17:23.305934       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	
	
	==> kube-controller-manager [4a53de8c7cfef15e1b4d2eb2d08e3992e87a5d45cb56ec1579cce90b650a86a3] <==
	I0831 23:09:30.456747       1 disruption.go:339] Sending events to api server.
	I0831 23:09:30.496798       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-ww8r9"
	I0831 23:09:30.520170       1 shared_informer.go:247] Caches are synced for deployment 
	I0831 23:09:30.526917       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-wv4m2"
	I0831 23:09:30.539948       1 shared_informer.go:247] Caches are synced for endpoint_slice 
	I0831 23:09:30.547069       1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator 
	I0831 23:09:30.560208       1 shared_informer.go:247] Caches are synced for endpoint 
	I0831 23:09:30.567495       1 shared_informer.go:247] Caches are synced for resource quota 
	I0831 23:09:30.577507       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-74ff55c5b to 2"
	I0831 23:09:30.592071       1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring 
	I0831 23:09:30.609375       1 shared_informer.go:247] Caches are synced for resource quota 
	I0831 23:09:30.661608       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-frm8z"
	E0831 23:09:30.718827       1 daemon_controller.go:320] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"418c55ad-e911-47e8-b152-b9f3bb93a15e", ResourceVersion:"258", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63860742554, loc:(*time.Location)(0x632eb80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4000e51c00), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4000e51c20)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.
LabelSelector)(0x4000e51c40), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.Gl
usterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0x4001043780), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4000e51
c60), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeS
ource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4000e51c80), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil),
AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.20.0", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4000e51cc0)}}, Resources:v1.R
esourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x4000e41b60), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPo
licy:"Always", TerminationGracePeriodSeconds:(*int64)(0x4000d9f9d8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x40002d4af0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), Runtime
ClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x400098f3f8)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x4000d9fa48)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
	I0831 23:09:30.720311       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-9zjlw"
	I0831 23:09:30.757287       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	E0831 23:09:30.806339       1 daemon_controller.go:320] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kindnet", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"8d85d4f3-5120-4a15-9d28-11a494458e42", ResourceVersion:"385", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63860742555, loc:(*time.Location)(0x632eb80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"},\"name\":\"kindnet\",\"namespace\":\"kube-system\"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"k
indnet\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"HOST_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.hostIP\"}}},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}},{\"name\":\"POD_SUBNET\",\"value\":\"10.244.0.0/16\"}],\"image\":\"docker.io/kindest/kindnetd:v20240813-c6f155d6\",\"name\":\"kindnet-cni\",\"resources\":{\"limits\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"},\"requests\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"}},\"securityContext\":{\"capabilities\":{\"add\":[\"NET_RAW\",\"NET_ADMIN\"]},\"privileged\":false},\"volumeMounts\":[{\"mountPath\":\"/etc/cni/net.d\",\"name\":\"cni-cfg\"},{\"mountPath\":\"/run/xtables.lock\",\"name\":\"xtables-lock\",\"readOnly\":false},{\"mountPath\":\"/lib/modules\",\"name\":\"lib-modules\",\"readOnly\":true}]}],\"hostNetwork\":true,\"serviceAccountName\":\"kindnet\",\"tolerations\":[{\"effect\":\"NoSchedule\",\"operator\
":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/cni/net.d\",\"type\":\"DirectoryOrCreate\"},\"name\":\"cni-cfg\"},{\"hostPath\":{\"path\":\"/run/xtables.lock\",\"type\":\"FileOrCreate\"},\"name\":\"xtables-lock\"},{\"hostPath\":{\"path\":\"/lib/modules\"},\"name\":\"lib-modules\"}]}}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubectl-client-side-apply", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4001c4ad20), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4001c4ad40)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4001c4ad60), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4001c4ad80)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x4001c4ada0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generatio
n:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"cni-cfg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001c4adc0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:
(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001c4ade0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlo
ckStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CS
I:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001c4ae00), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Q
uobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kindnet-cni", Image:"docker.io/kindest/kindnetd:v20240813-c6f155d6", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOST_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4001c4ae20)}, v1.EnvVar{Name:"POD_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4001c4ae60)}, v1.EnvVar{Name:"POD_SUBNET", Value:"10.244.0.0/16", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i
:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", Sub
Path:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x4001c74000), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x4001c32dd8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x40004cfea0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinit
y:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x400011be50)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x4001c32e20)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:1, NumberReady:0, ObservedGeneration:1, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:1, CollisionCount:(*int32)(nil), Conditions:[]v
1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again
	I0831 23:09:30.986664       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0831 23:09:30.986695       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0831 23:09:31.020215       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0831 23:09:32.030493       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
	I0831 23:09:32.076462       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-frm8z"
	I0831 23:09:35.387190       1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I0831 23:11:16.425226       1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-9975d5f86 to 1"
	I0831 23:11:16.475093       1 event.go:291] "Event occurred" object="kube-system/metrics-server-9975d5f86" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-9975d5f86-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
	E0831 23:11:16.497326       1 replica_set.go:532] sync "kube-system/metrics-server-9975d5f86" failed with pods "metrics-server-9975d5f86-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
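
Two error families appear above. The "object has been modified" entries at 23:09:30 are ordinary optimistic-concurrency conflicts: the writer must re-read the object at its latest resourceVersion and retry, which the controller does automatically. The FailedCreate at 23:11:16 is a startup race; the metrics-server pod does exist later in the kubelet logs, so the ServiceAccount arrived shortly afterwards. Hedged checks, with the context name assumed from the profile:

  $ kubectl --context old-k8s-version-777320 -n kube-system get serviceaccount metrics-server
  $ kubectl --context old-k8s-version-777320 -n kube-system get daemonset kube-proxy -o jsonpath='{.metadata.resourceVersion}'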
	
	
	==> kube-proxy [375dfe16bc1a3f88bb1adae552829f1de1f6ccd7a3b2bf3eb9e823c85daa3329] <==
	I0831 23:09:31.633497       1 node.go:172] Successfully retrieved node IP: 192.168.85.2
	I0831 23:09:31.633592       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.85.2), assume IPv4 operation
	W0831 23:09:31.658403       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0831 23:09:31.658498       1 server_others.go:185] Using iptables Proxier.
	I0831 23:09:31.658712       1 server.go:650] Version: v1.20.0
	I0831 23:09:31.659982       1 config.go:315] Starting service config controller
	I0831 23:09:31.659991       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0831 23:09:31.660080       1 config.go:224] Starting endpoint slice config controller
	I0831 23:09:31.660112       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0831 23:09:31.760169       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0831 23:09:31.760319       1 shared_informer.go:247] Caches are synced for service config 
	
	
	==> kube-proxy [a630dbfa8aa905e5ea3326c49649056da78bba5cfd2beda22ff0f2f93515a197] <==
	I0831 23:12:00.771629       1 node.go:172] Successfully retrieved node IP: 192.168.85.2
	I0831 23:12:00.771807       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.85.2), assume IPv4 operation
	W0831 23:12:00.799728       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0831 23:12:00.799816       1 server_others.go:185] Using iptables Proxier.
	I0831 23:12:00.800019       1 server.go:650] Version: v1.20.0
	I0831 23:12:00.800848       1 config.go:315] Starting service config controller
	I0831 23:12:00.800866       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0831 23:12:00.800897       1 config.go:224] Starting endpoint slice config controller
	I0831 23:12:00.800900       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0831 23:12:00.900992       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0831 23:12:00.901034       1 shared_informer.go:247] Caches are synced for service config 
	
	
	==> kube-scheduler [cb403497717362a835119a12fcd9a98f048e1513652343987c5706732ded954d] <==
	W0831 23:09:11.422214       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0831 23:09:11.422225       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0831 23:09:11.518128       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0831 23:09:11.525545       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0831 23:09:11.525602       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0831 23:09:11.525649       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0831 23:09:11.530581       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0831 23:09:11.530940       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0831 23:09:11.532025       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0831 23:09:11.532132       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0831 23:09:11.532452       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0831 23:09:11.532489       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0831 23:09:11.555812       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0831 23:09:11.555897       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0831 23:09:11.556080       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0831 23:09:11.556136       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0831 23:09:11.556186       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0831 23:09:11.556245       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0831 23:09:12.427208       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0831 23:09:12.445282       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0831 23:09:12.474733       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0831 23:09:12.512419       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0831 23:09:12.530963       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0831 23:09:12.696933       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0831 23:09:15.025870       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
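
The burst of "forbidden" list/watch errors at 23:09:11-12 is the scheduler starting before the apiserver finished bootstrapping its default RBAC; the successful cache sync at 23:09:15 shows it recovered on its own. A sketch for inspecting the bootstrap binding, context name assumed from the profile:

  $ kubectl --context old-k8s-version-777320 get clusterrolebinding system:kube-scheduler -o yaml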
	
	
	==> kube-scheduler [f3564f974c7c186a4cf3110fbdf83d7607a54cbd7d58484326748df57213e666] <==
	I0831 23:11:53.869258       1 serving.go:331] Generated self-signed cert in-memory
	W0831 23:11:58.214971       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0831 23:11:58.215015       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0831 23:11:58.215029       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0831 23:11:58.215035       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0831 23:11:58.341396       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0831 23:11:58.341544       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0831 23:11:58.342717       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0831 23:11:58.342816       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0831 23:11:58.543839       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kubelet <==
	Aug 31 23:15:52 old-k8s-version-777320 kubelet[661]: E0831 23:15:52.560966     661 pod_workers.go:191] Error syncing pod b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01 ("metrics-server-9975d5f86-dl7gj_kube-system(b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 31 23:16:00 old-k8s-version-777320 kubelet[661]: I0831 23:16:00.557648     661 scope.go:95] [topologymanager] RemoveContainer - Container ID: c2de4a0a64d9d9a0769c26950e0e4a346b6b730b12124485a252c62fde7e17ca
	Aug 31 23:16:00 old-k8s-version-777320 kubelet[661]: E0831 23:16:00.559529     661 pod_workers.go:191] Error syncing pod 602e3f57-a665-4345-99f4-ac5f270847b4 ("dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"
	Aug 31 23:16:04 old-k8s-version-777320 kubelet[661]: E0831 23:16:04.557871     661 pod_workers.go:191] Error syncing pod b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01 ("metrics-server-9975d5f86-dl7gj_kube-system(b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 31 23:16:12 old-k8s-version-777320 kubelet[661]: I0831 23:16:12.557285     661 scope.go:95] [topologymanager] RemoveContainer - Container ID: c2de4a0a64d9d9a0769c26950e0e4a346b6b730b12124485a252c62fde7e17ca
	Aug 31 23:16:12 old-k8s-version-777320 kubelet[661]: E0831 23:16:12.558116     661 pod_workers.go:191] Error syncing pod 602e3f57-a665-4345-99f4-ac5f270847b4 ("dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"
	Aug 31 23:16:17 old-k8s-version-777320 kubelet[661]: E0831 23:16:17.558685     661 pod_workers.go:191] Error syncing pod b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01 ("metrics-server-9975d5f86-dl7gj_kube-system(b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 31 23:16:26 old-k8s-version-777320 kubelet[661]: I0831 23:16:26.557579     661 scope.go:95] [topologymanager] RemoveContainer - Container ID: c2de4a0a64d9d9a0769c26950e0e4a346b6b730b12124485a252c62fde7e17ca
	Aug 31 23:16:26 old-k8s-version-777320 kubelet[661]: E0831 23:16:26.559804     661 pod_workers.go:191] Error syncing pod 602e3f57-a665-4345-99f4-ac5f270847b4 ("dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"
	Aug 31 23:16:30 old-k8s-version-777320 kubelet[661]: E0831 23:16:30.558289     661 pod_workers.go:191] Error syncing pod b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01 ("metrics-server-9975d5f86-dl7gj_kube-system(b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 31 23:16:37 old-k8s-version-777320 kubelet[661]: I0831 23:16:37.557115     661 scope.go:95] [topologymanager] RemoveContainer - Container ID: c2de4a0a64d9d9a0769c26950e0e4a346b6b730b12124485a252c62fde7e17ca
	Aug 31 23:16:37 old-k8s-version-777320 kubelet[661]: E0831 23:16:37.557945     661 pod_workers.go:191] Error syncing pod 602e3f57-a665-4345-99f4-ac5f270847b4 ("dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"
	Aug 31 23:16:45 old-k8s-version-777320 kubelet[661]: E0831 23:16:45.557768     661 pod_workers.go:191] Error syncing pod b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01 ("metrics-server-9975d5f86-dl7gj_kube-system(b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 31 23:16:50 old-k8s-version-777320 kubelet[661]: I0831 23:16:50.557309     661 scope.go:95] [topologymanager] RemoveContainer - Container ID: c2de4a0a64d9d9a0769c26950e0e4a346b6b730b12124485a252c62fde7e17ca
	Aug 31 23:16:50 old-k8s-version-777320 kubelet[661]: E0831 23:16:50.558846     661 pod_workers.go:191] Error syncing pod 602e3f57-a665-4345-99f4-ac5f270847b4 ("dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"
	Aug 31 23:16:58 old-k8s-version-777320 kubelet[661]: E0831 23:16:58.561819     661 pod_workers.go:191] Error syncing pod b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01 ("metrics-server-9975d5f86-dl7gj_kube-system(b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 31 23:17:02 old-k8s-version-777320 kubelet[661]: I0831 23:17:02.557749     661 scope.go:95] [topologymanager] RemoveContainer - Container ID: c2de4a0a64d9d9a0769c26950e0e4a346b6b730b12124485a252c62fde7e17ca
	Aug 31 23:17:02 old-k8s-version-777320 kubelet[661]: E0831 23:17:02.558100     661 pod_workers.go:191] Error syncing pod 602e3f57-a665-4345-99f4-ac5f270847b4 ("dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"
	Aug 31 23:17:09 old-k8s-version-777320 kubelet[661]: E0831 23:17:09.557874     661 pod_workers.go:191] Error syncing pod b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01 ("metrics-server-9975d5f86-dl7gj_kube-system(b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 31 23:17:17 old-k8s-version-777320 kubelet[661]: I0831 23:17:17.557132     661 scope.go:95] [topologymanager] RemoveContainer - Container ID: c2de4a0a64d9d9a0769c26950e0e4a346b6b730b12124485a252c62fde7e17ca
	Aug 31 23:17:17 old-k8s-version-777320 kubelet[661]: E0831 23:17:17.557481     661 pod_workers.go:191] Error syncing pod 602e3f57-a665-4345-99f4-ac5f270847b4 ("dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"
	Aug 31 23:17:24 old-k8s-version-777320 kubelet[661]: E0831 23:17:24.557898     661 pod_workers.go:191] Error syncing pod b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01 ("metrics-server-9975d5f86-dl7gj_kube-system(b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 31 23:17:32 old-k8s-version-777320 kubelet[661]: I0831 23:17:32.557112     661 scope.go:95] [topologymanager] RemoveContainer - Container ID: c2de4a0a64d9d9a0769c26950e0e4a346b6b730b12124485a252c62fde7e17ca
	Aug 31 23:17:32 old-k8s-version-777320 kubelet[661]: E0831 23:17:32.557441     661 pod_workers.go:191] Error syncing pod 602e3f57-a665-4345-99f4-ac5f270847b4 ("dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-klp6x_kubernetes-dashboard(602e3f57-a665-4345-99f4-ac5f270847b4)"
	Aug 31 23:17:38 old-k8s-version-777320 kubelet[661]: E0831 23:17:38.563544     661 pod_workers.go:191] Error syncing pod b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01 ("metrics-server-9975d5f86-dl7gj_kube-system(b9a02b0a-ad39-43ff-a1a3-c0caaab7bf01)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
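
The ImagePullBackOff loop above matches the suite's deliberate fake.domain registry override for metrics-server (the EnableAddonWhileActive step in the pass list below enables the addon against that unreachable registry), so the pull can never succeed and is not itself the failure under investigation. A minimal way to confirm the failing image reference from the host, reusing the profile and pod name from this run:

	kubectl --context old-k8s-version-777320 -n kube-system describe pod metrics-server-9975d5f86-dl7gj
	# Events should show: Back-off pulling image "fake.domain/registry.k8s.io/echoserver:1.4"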
	
	
	==> kubernetes-dashboard [3a128b2ebef5226632792617aecdd7d9fa214ff983a449971d7ccdfab3a99f21] <==
	2024/08/31 23:12:22 Using namespace: kubernetes-dashboard
	2024/08/31 23:12:22 Using in-cluster config to connect to apiserver
	2024/08/31 23:12:22 Using secret token for csrf signing
	2024/08/31 23:12:22 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/08/31 23:12:22 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/08/31 23:12:22 Successful initial request to the apiserver, version: v1.20.0
	2024/08/31 23:12:22 Generating JWE encryption key
	2024/08/31 23:12:22 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/08/31 23:12:22 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/08/31 23:12:23 Initializing JWE encryption key from synchronized object
	2024/08/31 23:12:23 Creating in-cluster Sidecar client
	2024/08/31 23:12:23 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/31 23:12:23 Serving insecurely on HTTP port: 9090
	2024/08/31 23:12:53 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/31 23:13:23 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/31 23:13:53 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/31 23:14:23 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/31 23:14:53 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/31 23:15:23 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/31 23:15:53 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/31 23:16:23 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/31 23:16:53 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/31 23:17:23 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/31 23:12:22 Starting overwatch
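
Two details worth noting in this dashboard log: every 30-second "Metric client health check" retry fails because the dashboard-metrics-scraper service it probes is backed only by the pod shown crash-looping in the kubelet log above, and the trailing "Starting overwatch" line with the earliest timestamp appears to be startup output flushed out of order rather than a late restart. To inspect the scraper directly, reusing the names from this run:

	kubectl --context old-k8s-version-777320 -n kubernetes-dashboard get pods
	kubectl --context old-k8s-version-777320 -n kubernetes-dashboard logs dashboard-metrics-scraper-8d5bb5db8-klp6x --previous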
	
	
	==> storage-provisioner [a7e7bee1e72395c1a5b201c2a16ae1e8c0725a75e13db42a8afd0bd7b61f1a6b] <==
	I0831 23:12:00.661350       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0831 23:12:30.670251       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
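
This first storage-provisioner instance died during the restart window: 10.96.0.1:443 is the in-cluster apiserver service VIP, and a 32-second i/o timeout on the version probe means the control plane was not reachable yet, so the process exits fatally (the F log prefix) and a replacement container succeeds below. Two quick checks for that state, assuming the same profile:

	kubectl --context old-k8s-version-777320 -n default get svc kubernetes        # ClusterIP should be 10.96.0.1
	kubectl --context old-k8s-version-777320 -n default get endpoints kubernetes  # empty while the apiserver is down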
	
	
	==> storage-provisioner [f7e4d956562c970c31e64420c5803c72175a04dd6fcef52066a3ece0a6233f9f] <==
	I0831 23:12:44.785999       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0831 23:12:44.815967       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0831 23:12:44.816350       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0831 23:13:02.337735       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0831 23:13:02.338138       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-777320_d0d3b22c-a7be-47e9-8c57-72e3629bccb2!
	I0831 23:13:02.338838       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"cc8418f5-8aa8-4a46-a658-b7beb7fab7b8", APIVersion:"v1", ResourceVersion:"846", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-777320_d0d3b22c-a7be-47e9-8c57-72e3629bccb2 became leader
	I0831 23:13:02.438379       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-777320_d0d3b22c-a7be-47e9-8c57-72e3629bccb2!
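
The provisioner only starts its controller after winning leader election; with this v1.20-era client-go the lock is an Endpoints object in kube-system, as the LeaderElection event above shows. If the current holder ever needs checking, it is recorded in the lock's leader annotation:

	kubectl --context old-k8s-version-777320 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml
	# the control-plane.alpha.kubernetes.io/leader annotation names the holder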
	

-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-777320 -n old-k8s-version-777320
helpers_test.go:262: (dbg) Run:  kubectl --context old-k8s-version-777320 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:273: non-running pods: metrics-server-9975d5f86-dl7gj
helpers_test.go:275: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: describe non-running pods <======
helpers_test.go:278: (dbg) Run:  kubectl --context old-k8s-version-777320 describe pod metrics-server-9975d5f86-dl7gj
helpers_test.go:278: (dbg) Non-zero exit: kubectl --context old-k8s-version-777320 describe pod metrics-server-9975d5f86-dl7gj: exit status 1 (131.497391ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-9975d5f86-dl7gj" not found

** /stderr **
helpers_test.go:280: kubectl --context old-k8s-version-777320 describe pod metrics-server-9975d5f86-dl7gj: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (373.41s)


Test pass (308/338)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 9.39
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.08
9 TestDownloadOnly/v1.20.0/DeleteAll 0.22
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.31.0/json-events 5.49
13 TestDownloadOnly/v1.31.0/preload-exists 0
17 TestDownloadOnly/v1.31.0/LogsDuration 0.08
18 TestDownloadOnly/v1.31.0/DeleteAll 0.22
19 TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds 0.12
21 TestBinaryMirror 0.62
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 217.89
31 TestAddons/serial/GCPAuth/Namespaces 0.19
33 TestAddons/parallel/Registry 17.36
34 TestAddons/parallel/Ingress 19.1
35 TestAddons/parallel/InspektorGadget 10.95
36 TestAddons/parallel/MetricsServer 6.76
39 TestAddons/parallel/CSI 56.72
40 TestAddons/parallel/Headlamp 11.24
41 TestAddons/parallel/CloudSpanner 6.59
42 TestAddons/parallel/LocalPath 53.04
43 TestAddons/parallel/NvidiaDevicePlugin 6.58
44 TestAddons/parallel/Yakd 11.92
45 TestAddons/StoppedEnableDisable 12.33
46 TestCertOptions 36.2
47 TestCertExpiration 227.34
49 TestForceSystemdFlag 41.2
50 TestForceSystemdEnv 45.58
51 TestDockerEnvContainerd 46.78
56 TestErrorSpam/setup 32.13
57 TestErrorSpam/start 0.72
58 TestErrorSpam/status 0.99
59 TestErrorSpam/pause 1.91
60 TestErrorSpam/unpause 1.84
61 TestErrorSpam/stop 1.47
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 51.24
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 6.05
68 TestFunctional/serial/KubeContext 0.06
69 TestFunctional/serial/KubectlGetPods 0.09
72 TestFunctional/serial/CacheCmd/cache/add_remote 4.09
73 TestFunctional/serial/CacheCmd/cache/add_local 1.36
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
75 TestFunctional/serial/CacheCmd/cache/list 0.05
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.31
77 TestFunctional/serial/CacheCmd/cache/cache_reload 2.06
78 TestFunctional/serial/CacheCmd/cache/delete 0.11
79 TestFunctional/serial/MinikubeKubectlCmd 0.14
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
81 TestFunctional/serial/ExtraConfig 44.84
82 TestFunctional/serial/ComponentHealth 0.1
83 TestFunctional/serial/LogsCmd 1.67
84 TestFunctional/serial/LogsFileCmd 1.88
85 TestFunctional/serial/InvalidService 5.07
87 TestFunctional/parallel/ConfigCmd 0.44
88 TestFunctional/parallel/DashboardCmd 7.95
89 TestFunctional/parallel/DryRun 0.53
90 TestFunctional/parallel/InternationalLanguage 0.28
91 TestFunctional/parallel/StatusCmd 1.36
95 TestFunctional/parallel/ServiceCmdConnect 8.69
96 TestFunctional/parallel/AddonsCmd 0.17
97 TestFunctional/parallel/PersistentVolumeClaim 26.72
99 TestFunctional/parallel/SSHCmd 0.79
100 TestFunctional/parallel/CpCmd 2
102 TestFunctional/parallel/FileSync 0.36
103 TestFunctional/parallel/CertSync 2.09
107 TestFunctional/parallel/NodeLabels 0.09
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.66
111 TestFunctional/parallel/License 0.36
113 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.62
114 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
116 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.33
117 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.14
118 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
122 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
123 TestFunctional/parallel/ServiceCmd/DeployApp 7.27
124 TestFunctional/parallel/ServiceCmd/List 0.51
125 TestFunctional/parallel/ServiceCmd/JSONOutput 0.58
126 TestFunctional/parallel/ProfileCmd/profile_not_create 0.45
127 TestFunctional/parallel/ServiceCmd/HTTPS 0.46
128 TestFunctional/parallel/ProfileCmd/profile_list 0.57
129 TestFunctional/parallel/ServiceCmd/Format 0.47
130 TestFunctional/parallel/ProfileCmd/profile_json_output 0.47
131 TestFunctional/parallel/ServiceCmd/URL 0.53
132 TestFunctional/parallel/MountCmd/any-port 8.54
133 TestFunctional/parallel/MountCmd/specific-port 1.35
134 TestFunctional/parallel/MountCmd/VerifyCleanup 2.01
135 TestFunctional/parallel/Version/short 0.08
136 TestFunctional/parallel/Version/components 1.24
137 TestFunctional/parallel/ImageCommands/ImageListShort 0.26
138 TestFunctional/parallel/ImageCommands/ImageListTable 0.33
139 TestFunctional/parallel/ImageCommands/ImageListJson 0.3
140 TestFunctional/parallel/ImageCommands/ImageListYaml 0.28
141 TestFunctional/parallel/ImageCommands/ImageBuild 3.48
142 TestFunctional/parallel/ImageCommands/Setup 0.8
143 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.45
144 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.39
145 TestFunctional/parallel/UpdateContextCmd/no_changes 0.2
146 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.21
147 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.19
148 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.68
149 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.36
150 TestFunctional/parallel/ImageCommands/ImageRemove 0.52
151 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.81
152 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.41
153 TestFunctional/delete_echo-server_images 0.04
154 TestFunctional/delete_my-image_image 0.01
155 TestFunctional/delete_minikube_cached_images 0.02
159 TestMultiControlPlane/serial/StartCluster 116.42
160 TestMultiControlPlane/serial/DeployApp 30.98
161 TestMultiControlPlane/serial/PingHostFromPods 1.93
162 TestMultiControlPlane/serial/AddWorkerNode 25.75
163 TestMultiControlPlane/serial/NodeLabels 0.12
164 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.74
165 TestMultiControlPlane/serial/CopyFile 18.88
166 TestMultiControlPlane/serial/StopSecondaryNode 12.8
167 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.56
168 TestMultiControlPlane/serial/RestartSecondaryNode 18.79
169 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.73
170 TestMultiControlPlane/serial/RestartClusterKeepsNodes 138.06
171 TestMultiControlPlane/serial/DeleteSecondaryNode 10.52
172 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.54
173 TestMultiControlPlane/serial/StopCluster 36.38
174 TestMultiControlPlane/serial/RestartCluster 51.39
175 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.56
176 TestMultiControlPlane/serial/AddSecondaryNode 39.06
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.78
181 TestJSONOutput/start/Command 55.43
182 TestJSONOutput/start/Audit 0
184 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
185 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/pause/Command 1.12
188 TestJSONOutput/pause/Audit 0
190 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/unpause/Command 0.67
194 TestJSONOutput/unpause/Audit 0
196 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/stop/Command 1.27
200 TestJSONOutput/stop/Audit 0
202 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
204 TestErrorJSONOutput 0.21
206 TestKicCustomNetwork/create_custom_network 43.32
207 TestKicCustomNetwork/use_default_bridge_network 33.95
208 TestKicExistingNetwork 35.6
209 TestKicCustomSubnet 30.82
210 TestKicStaticIP 32.75
211 TestMainNoArgs 0.05
212 TestMinikubeProfile 65.83
215 TestMountStart/serial/StartWithMountFirst 6
216 TestMountStart/serial/VerifyMountFirst 0.26
217 TestMountStart/serial/StartWithMountSecond 6.18
218 TestMountStart/serial/VerifyMountSecond 0.25
219 TestMountStart/serial/DeleteFirst 1.6
220 TestMountStart/serial/VerifyMountPostDelete 0.25
221 TestMountStart/serial/Stop 1.19
222 TestMountStart/serial/RestartStopped 7.95
223 TestMountStart/serial/VerifyMountPostStop 0.26
226 TestContainerIPsMultiNetwork/serial/CreateExtnet 0.07
227 TestContainerIPsMultiNetwork/serial/FreshStart 59.49
228 TestContainerIPsMultiNetwork/serial/ConnectExtnet 0.1
229 TestContainerIPsMultiNetwork/serial/Stop 5.93
230 TestContainerIPsMultiNetwork/serial/VerifyStatus 0.07
231 TestContainerIPsMultiNetwork/serial/Start 26.26
232 TestContainerIPsMultiNetwork/serial/VerifyNetworks 0.02
233 TestContainerIPsMultiNetwork/serial/Delete 2.43
234 TestContainerIPsMultiNetwork/serial/DeleteExtnet 0.1
235 TestContainerIPsMultiNetwork/serial/VerifyDeletedResources 0.12
238 TestMultiNode/serial/FreshStart2Nodes 64.38
239 TestMultiNode/serial/DeployApp2Nodes 16.93
240 TestMultiNode/serial/PingHostFrom2Pods 0.97
241 TestMultiNode/serial/AddNode 16.33
242 TestMultiNode/serial/MultiNodeLabels 0.09
243 TestMultiNode/serial/ProfileList 0.31
244 TestMultiNode/serial/CopyFile 9.93
245 TestMultiNode/serial/StopNode 2.26
246 TestMultiNode/serial/StartAfterStop 9.44
247 TestMultiNode/serial/RestartKeepsNodes 98.11
248 TestMultiNode/serial/DeleteNode 5.56
249 TestMultiNode/serial/StopMultiNode 23.93
250 TestMultiNode/serial/RestartMultiNode 49.21
251 TestMultiNode/serial/ValidateNameConflict 33.41
256 TestPreload 110.26
258 TestScheduledStopUnix 107.19
261 TestInsufficientStorage 10.24
262 TestRunningBinaryUpgrade 72.47
264 TestKubernetesUpgrade 344.52
265 TestMissingContainerUpgrade 182.1
267 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
268 TestNoKubernetes/serial/StartWithK8s 39.31
269 TestNoKubernetes/serial/StartWithStopK8s 21.39
270 TestNoKubernetes/serial/Start 6.59
271 TestNoKubernetes/serial/VerifyK8sNotRunning 0.26
272 TestNoKubernetes/serial/ProfileList 0.9
273 TestNoKubernetes/serial/Stop 1.19
274 TestNoKubernetes/serial/StartNoArgs 6.58
275 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.33
276 TestStoppedBinaryUpgrade/Setup 0.7
277 TestStoppedBinaryUpgrade/Upgrade 106.78
278 TestStoppedBinaryUpgrade/MinikubeLogs 0.98
287 TestPause/serial/Start 60.06
288 TestPause/serial/SecondStartNoReconfiguration 7.24
289 TestPause/serial/Pause 0.8
290 TestPause/serial/VerifyStatus 0.36
291 TestPause/serial/Unpause 0.85
292 TestPause/serial/PauseAgain 0.97
293 TestPause/serial/DeletePaused 2.87
294 TestPause/serial/VerifyDeletedResources 0.52
302 TestNetworkPlugins/group/false 4.92
307 TestStartStop/group/old-k8s-version/serial/FirstStart 156.27
309 TestStartStop/group/no-preload/serial/FirstStart 76.49
310 TestStartStop/group/old-k8s-version/serial/DeployApp 9.87
311 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.37
312 TestStartStop/group/old-k8s-version/serial/Stop 13.55
313 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.22
315 TestStartStop/group/no-preload/serial/DeployApp 9.44
316 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.2
317 TestStartStop/group/no-preload/serial/Stop 12.09
318 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.18
319 TestStartStop/group/no-preload/serial/SecondStart 267.68
320 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
321 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.11
322 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.25
323 TestStartStop/group/no-preload/serial/Pause 3.13
325 TestStartStop/group/embed-certs/serial/FirstStart 55.01
326 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
327 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.14
328 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.37
329 TestStartStop/group/old-k8s-version/serial/Pause 4.23
331 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 51.81
332 TestStartStop/group/embed-certs/serial/DeployApp 10.5
333 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.24
334 TestStartStop/group/embed-certs/serial/Stop 12.23
335 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.19
336 TestStartStop/group/embed-certs/serial/SecondStart 289.38
337 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.51
338 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.6
339 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.25
340 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.17
341 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 265.87
342 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
343 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.12
344 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
345 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.29
346 TestStartStop/group/embed-certs/serial/Pause 3.19
347 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.16
348 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.31
349 TestStartStop/group/default-k8s-diff-port/serial/Pause 4.12
351 TestStartStop/group/newest-cni/serial/FirstStart 41.46
352 TestNetworkPlugins/group/auto/Start 56.25
353 TestStartStop/group/newest-cni/serial/DeployApp 0
354 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 2.11
355 TestStartStop/group/newest-cni/serial/Stop 1.3
356 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
357 TestStartStop/group/newest-cni/serial/SecondStart 16.32
358 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
359 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
360 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.3
361 TestStartStop/group/newest-cni/serial/Pause 3.72
362 TestNetworkPlugins/group/auto/KubeletFlags 0.61
363 TestNetworkPlugins/group/auto/NetCatPod 12.5
364 TestNetworkPlugins/group/kindnet/Start 61.54
365 TestNetworkPlugins/group/auto/DNS 0.21
366 TestNetworkPlugins/group/auto/Localhost 0.18
367 TestNetworkPlugins/group/auto/HairPin 0.19
368 TestNetworkPlugins/group/calico/Start 60.63
369 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
370 TestNetworkPlugins/group/kindnet/KubeletFlags 0.34
371 TestNetworkPlugins/group/kindnet/NetCatPod 10.35
372 TestNetworkPlugins/group/kindnet/DNS 0.2
373 TestNetworkPlugins/group/kindnet/Localhost 0.16
374 TestNetworkPlugins/group/kindnet/HairPin 0.24
375 TestNetworkPlugins/group/calico/ControllerPod 6.01
376 TestNetworkPlugins/group/custom-flannel/Start 55.04
377 TestNetworkPlugins/group/calico/KubeletFlags 0.4
378 TestNetworkPlugins/group/calico/NetCatPod 15.44
379 TestNetworkPlugins/group/calico/DNS 0.21
380 TestNetworkPlugins/group/calico/Localhost 0.22
381 TestNetworkPlugins/group/calico/HairPin 0.18
382 TestNetworkPlugins/group/enable-default-cni/Start 71
383 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.42
384 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.36
385 TestNetworkPlugins/group/custom-flannel/DNS 0.25
386 TestNetworkPlugins/group/custom-flannel/Localhost 0.26
387 TestNetworkPlugins/group/custom-flannel/HairPin 0.28
388 TestNetworkPlugins/group/flannel/Start 53
389 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.41
390 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.4
391 TestNetworkPlugins/group/enable-default-cni/DNS 0.24
392 TestNetworkPlugins/group/enable-default-cni/Localhost 0.18
393 TestNetworkPlugins/group/enable-default-cni/HairPin 0.2
394 TestNetworkPlugins/group/flannel/ControllerPod 6.01
395 TestNetworkPlugins/group/bridge/Start 49.12
396 TestNetworkPlugins/group/flannel/KubeletFlags 0.35
397 TestNetworkPlugins/group/flannel/NetCatPod 11.34
398 TestNetworkPlugins/group/flannel/DNS 0.27
399 TestNetworkPlugins/group/flannel/Localhost 0.25
400 TestNetworkPlugins/group/flannel/HairPin 0.18
401 TestNetworkPlugins/group/bridge/KubeletFlags 0.31
402 TestNetworkPlugins/group/bridge/NetCatPod 10.31
403 TestNetworkPlugins/group/bridge/DNS 0.16
404 TestNetworkPlugins/group/bridge/Localhost 0.14
405 TestNetworkPlugins/group/bridge/HairPin 0.14

TestDownloadOnly/v1.20.0/json-events (9.39s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-628848 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-628848 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (9.385622729s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (9.39s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-628848
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-628848: exit status 85 (81.962014ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-628848 | jenkins | v1.33.1 | 31 Aug 24 22:20 UTC |          |
	|         | -p download-only-628848        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/31 22:20:28
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0831 22:20:28.589233 1166790 out.go:345] Setting OutFile to fd 1 ...
	I0831 22:20:28.589354 1166790 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:20:28.589363 1166790 out.go:358] Setting ErrFile to fd 2...
	I0831 22:20:28.589368 1166790 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:20:28.589623 1166790 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-1161402/.minikube/bin
	W0831 22:20:28.589753 1166790 root.go:314] Error reading config file at /home/jenkins/minikube-integration/18943-1161402/.minikube/config/config.json: open /home/jenkins/minikube-integration/18943-1161402/.minikube/config/config.json: no such file or directory
	I0831 22:20:28.590195 1166790 out.go:352] Setting JSON to true
	I0831 22:20:28.591048 1166790 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":21777,"bootTime":1725121051,"procs":163,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0831 22:20:28.591117 1166790 start.go:139] virtualization:  
	I0831 22:20:28.594293 1166790 out.go:97] [download-only-628848] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	W0831 22:20:28.594506 1166790 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/18943-1161402/.minikube/cache/preloaded-tarball: no such file or directory
	I0831 22:20:28.594546 1166790 notify.go:220] Checking for updates...
	I0831 22:20:28.596230 1166790 out.go:169] MINIKUBE_LOCATION=18943
	I0831 22:20:28.598852 1166790 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0831 22:20:28.601462 1166790 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18943-1161402/kubeconfig
	I0831 22:20:28.603787 1166790 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-1161402/.minikube
	I0831 22:20:28.605656 1166790 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0831 22:20:28.609757 1166790 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0831 22:20:28.610047 1166790 driver.go:392] Setting default libvirt URI to qemu:///system
	I0831 22:20:28.632085 1166790 docker.go:123] docker version: linux-27.2.0:Docker Engine - Community
	I0831 22:20:28.632184 1166790 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0831 22:20:28.692600 1166790 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-08-31 22:20:28.683112362 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0831 22:20:28.692753 1166790 docker.go:307] overlay module found
	I0831 22:20:28.694981 1166790 out.go:97] Using the docker driver based on user configuration
	I0831 22:20:28.695005 1166790 start.go:297] selected driver: docker
	I0831 22:20:28.695012 1166790 start.go:901] validating driver "docker" against <nil>
	I0831 22:20:28.695135 1166790 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0831 22:20:28.749350 1166790 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-08-31 22:20:28.74028151 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0831 22:20:28.749536 1166790 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0831 22:20:28.749850 1166790 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0831 22:20:28.750007 1166790 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0831 22:20:28.752581 1166790 out.go:169] Using Docker driver with root privileges
	I0831 22:20:28.754471 1166790 cni.go:84] Creating CNI manager for ""
	I0831 22:20:28.754502 1166790 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0831 22:20:28.754516 1166790 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0831 22:20:28.754605 1166790 start.go:340] cluster config:
	{Name:download-only-628848 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-628848 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 22:20:28.756904 1166790 out.go:97] Starting "download-only-628848" primary control-plane node in "download-only-628848" cluster
	I0831 22:20:28.756935 1166790 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0831 22:20:28.759285 1166790 out.go:97] Pulling base image v0.0.44-1724862063-19530 ...
	I0831 22:20:28.759314 1166790 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0831 22:20:28.759416 1166790 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 in local docker daemon
	I0831 22:20:28.774656 1166790 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 to local cache
	I0831 22:20:28.774850 1166790 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 in local cache directory
	I0831 22:20:28.774948 1166790 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 to local cache
	I0831 22:20:28.821116 1166790 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0831 22:20:28.821143 1166790 cache.go:56] Caching tarball of preloaded images
	I0831 22:20:28.821906 1166790 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0831 22:20:28.824741 1166790 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0831 22:20:28.824776 1166790 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I0831 22:20:28.908337 1166790 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:7e3d48ccb9f143791669d02e14ce1643 -> /home/jenkins/minikube-integration/18943-1161402/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-628848 host does not exist
	  To start a cluster, run: "minikube start -p download-only-628848"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.08s)
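
The preload fetch in the log above is checksum-pinned: the ?checksum=md5:... query on the download URL instructs minikube's downloader to verify the tarball against that digest before caching it. A manual spot-check with the same URL and digest from this run (network access assumed):

	curl -LO https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	md5sum preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4   # expect 7e3d48ccb9f143791669d02e14ce1643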

TestDownloadOnly/v1.20.0/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.22s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-628848
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnly/v1.31.0/json-events (5.49s)

=== RUN   TestDownloadOnly/v1.31.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-610624 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-610624 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (5.484821197s)
--- PASS: TestDownloadOnly/v1.31.0/json-events (5.49s)

TestDownloadOnly/v1.31.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0/preload-exists (0.00s)

TestDownloadOnly/v1.31.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.31.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-610624
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-610624: exit status 85 (76.92926ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-628848 | jenkins | v1.33.1 | 31 Aug 24 22:20 UTC |                     |
	|         | -p download-only-628848        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 31 Aug 24 22:20 UTC | 31 Aug 24 22:20 UTC |
	| delete  | -p download-only-628848        | download-only-628848 | jenkins | v1.33.1 | 31 Aug 24 22:20 UTC | 31 Aug 24 22:20 UTC |
	| start   | -o=json --download-only        | download-only-610624 | jenkins | v1.33.1 | 31 Aug 24 22:20 UTC |                     |
	|         | -p download-only-610624        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/31 22:20:38
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0831 22:20:38.414382 1166992 out.go:345] Setting OutFile to fd 1 ...
	I0831 22:20:38.414633 1166992 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:20:38.414660 1166992 out.go:358] Setting ErrFile to fd 2...
	I0831 22:20:38.414679 1166992 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:20:38.414945 1166992 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-1161402/.minikube/bin
	I0831 22:20:38.415406 1166992 out.go:352] Setting JSON to true
	I0831 22:20:38.416299 1166992 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":21787,"bootTime":1725121051,"procs":160,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0831 22:20:38.416395 1166992 start.go:139] virtualization:  
	I0831 22:20:38.419155 1166992 out.go:97] [download-only-610624] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0831 22:20:38.419386 1166992 notify.go:220] Checking for updates...
	I0831 22:20:38.421052 1166992 out.go:169] MINIKUBE_LOCATION=18943
	I0831 22:20:38.423595 1166992 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0831 22:20:38.425426 1166992 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18943-1161402/kubeconfig
	I0831 22:20:38.427231 1166992 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-1161402/.minikube
	I0831 22:20:38.428860 1166992 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0831 22:20:38.432283 1166992 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0831 22:20:38.432541 1166992 driver.go:392] Setting default libvirt URI to qemu:///system
	I0831 22:20:38.462276 1166992 docker.go:123] docker version: linux-27.2.0:Docker Engine - Community
	I0831 22:20:38.462413 1166992 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0831 22:20:38.534090 1166992 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-08-31 22:20:38.524354706 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0831 22:20:38.534216 1166992 docker.go:307] overlay module found
	I0831 22:20:38.536261 1166992 out.go:97] Using the docker driver based on user configuration
	I0831 22:20:38.536300 1166992 start.go:297] selected driver: docker
	I0831 22:20:38.536307 1166992 start.go:901] validating driver "docker" against <nil>
	I0831 22:20:38.536417 1166992 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0831 22:20:38.595307 1166992 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-08-31 22:20:38.585051833 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0831 22:20:38.595479 1166992 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0831 22:20:38.595794 1166992 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0831 22:20:38.595949 1166992 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0831 22:20:38.597941 1166992 out.go:169] Using Docker driver with root privileges
	I0831 22:20:38.599577 1166992 cni.go:84] Creating CNI manager for ""
	I0831 22:20:38.599603 1166992 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0831 22:20:38.599615 1166992 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0831 22:20:38.599707 1166992 start.go:340] cluster config:
	{Name:download-only-610624 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:download-only-610624 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 22:20:38.601582 1166992 out.go:97] Starting "download-only-610624" primary control-plane node in "download-only-610624" cluster
	I0831 22:20:38.601612 1166992 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0831 22:20:38.603575 1166992 out.go:97] Pulling base image v0.0.44-1724862063-19530 ...
	I0831 22:20:38.603608 1166992 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime containerd
	I0831 22:20:38.603798 1166992 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 in local docker daemon
	I0831 22:20:38.618839 1166992 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 to local cache
	I0831 22:20:38.618951 1166992 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 in local cache directory
	I0831 22:20:38.618975 1166992 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 in local cache directory, skipping pull
	I0831 22:20:38.618983 1166992 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 exists in cache, skipping pull
	I0831 22:20:38.618991 1166992 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 as a tarball
	I0831 22:20:38.659763 1166992 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4
	I0831 22:20:38.659792 1166992 cache.go:56] Caching tarball of preloaded images
	I0831 22:20:38.659977 1166992 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime containerd
	I0831 22:20:38.661981 1166992 out.go:97] Downloading Kubernetes v1.31.0 preload ...
	I0831 22:20:38.662009 1166992 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4 ...
	I0831 22:20:38.748367 1166992 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:ea65ad5fd42227e06b9323ff45647208 -> /home/jenkins/minikube-integration/18943-1161402/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4
	I0831 22:20:42.299795 1166992 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4 ...
	I0831 22:20:42.299919 1166992 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/18943-1161402/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4 ...
	I0831 22:20:43.162092 1166992 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on containerd
	I0831 22:20:43.162451 1166992 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/download-only-610624/config.json ...
	I0831 22:20:43.162487 1166992 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/download-only-610624/config.json: {Name:mkf28c3485c1460b77f8e55b7aac19a2aaa4818c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:20:43.162694 1166992 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime containerd
	I0831 22:20:43.162860 1166992 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/18943-1161402/.minikube/cache/linux/arm64/v1.31.0/kubectl
	
	
	* The control-plane node download-only-610624 host does not exist
	  To start a cluster, run: "minikube start -p download-only-610624"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0/LogsDuration (0.08s)

                                                
                                    
TestDownloadOnly/v1.31.0/DeleteAll (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.0/DeleteAll (0.22s)

                                                
                                    
TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-610624
--- PASS: TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
TestBinaryMirror (0.62s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-741122 --alsologtostderr --binary-mirror http://127.0.0.1:34335 --driver=docker  --container-runtime=containerd
helpers_test.go:176: Cleaning up "binary-mirror-741122" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-741122
--- PASS: TestBinaryMirror (0.62s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-516593
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-516593: exit status 85 (70.00509ms)

                                                
                                                
-- stdout --
	* Profile "addons-516593" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-516593"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-516593
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-516593: exit status 85 (69.684969ms)

                                                
                                                
-- stdout --
	* Profile "addons-516593" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-516593"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/Setup (217.89s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p addons-516593 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p addons-516593 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns: (3m37.889825069s)
--- PASS: TestAddons/Setup (217.89s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.19s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-516593 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-516593 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.19s)

                                                
                                    
TestAddons/parallel/Registry (17.36s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 4.009422ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:345: "registry-6fb4cdfc84-wvmsl" [040a9b4e-0596-41c6-9739-2a1d51dfac80] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.011991952s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:345: "registry-proxy-z5ckz" [5f1deaf5-fb65-4500-bb12-b3e76411722b] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.004056767s
addons_test.go:342: (dbg) Run:  kubectl --context addons-516593 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-516593 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-516593 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.263872307s)
addons_test.go:361: (dbg) Run:  out/minikube-linux-arm64 -p addons-516593 ip
2024/08/31 22:28:19 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-arm64 -p addons-516593 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (17.36s)
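
For reference, the registry probe this test performs can be replayed by hand. A minimal sketch, assuming the addons-516593 profile is still running with the registry addon enabled (same commands as the test, condensed):

    # in-cluster DNS probe via a throwaway busybox pod
    kubectl --context addons-516593 run registry-test --rm -it --restart=Never --image=gcr.io/k8s-minikube/busybox -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
    # node-level probe against the registry's port 5000, as in the GET above
    curl -v "http://$(minikube -p addons-516593 ip):5000"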

                                                
                                    
TestAddons/parallel/Ingress (19.1s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-516593 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-516593 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-516593 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:345: "nginx" [9a329e06-816c-4c2c-974d-fa9d463bdbc7] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:345: "nginx" [9a329e06-816c-4c2c-974d-fa9d463bdbc7] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 8.004333387s
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-516593 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-516593 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-516593 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p addons-516593 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-arm64 -p addons-516593 addons disable ingress-dns --alsologtostderr -v=1: (1.692012106s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-arm64 -p addons-516593 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-arm64 -p addons-516593 addons disable ingress --alsologtostderr -v=1: (7.770349204s)
--- PASS: TestAddons/parallel/Ingress (19.10s)
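
The ingress checks above reduce to two hand-runnable probes. A minimal sketch, assuming the nginx pod and ingress from testdata are still applied (the literal node IP from the log is replaced with a lookup):

    # curl the in-node ingress controller, selecting the route by Host header
    minikube -p addons-516593 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
    # resolve a test hostname through the ingress-dns addon at the node IP
    nslookup hello-john.test "$(minikube -p addons-516593 ip)"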

                                                
                                    
TestAddons/parallel/InspektorGadget (10.95s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:345: "gadget-p7vk8" [55624912-c55a-43e2-84cf-47fb046ffe89] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004525483s
addons_test.go:851: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-516593
addons_test.go:851: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-516593: (5.947738627s)
--- PASS: TestAddons/parallel/InspektorGadget (10.95s)

                                                
                                    
TestAddons/parallel/MetricsServer (6.76s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 2.683757ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:345: "metrics-server-84c5f94fbc-kmkrh" [04a55105-b90d-4b97-af79-0063fbcb110c] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.003342129s
addons_test.go:417: (dbg) Run:  kubectl --context addons-516593 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-linux-arm64 -p addons-516593 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.76s)

                                                
                                    
TestAddons/parallel/CSI (56.72s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 5.419214ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-516593 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:395: (dbg) Run:  kubectl --context addons-516593 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-516593 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-516593 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-516593 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-516593 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-516593 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-516593 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-516593 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-516593 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-516593 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-516593 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-516593 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-516593 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-516593 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-516593 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-516593 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-516593 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-516593 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-516593 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:345: "task-pv-pod" [eb37e6f8-ae50-4024-8e45-25aa3be532cd] Pending
helpers_test.go:345: "task-pv-pod" [eb37e6f8-ae50-4024-8e45-25aa3be532cd] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:345: "task-pv-pod" [eb37e6f8-ae50-4024-8e45-25aa3be532cd] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.004128173s
addons_test.go:590: (dbg) Run:  kubectl --context addons-516593 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:420: (dbg) Run:  kubectl --context addons-516593 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:420: (dbg) Run:  kubectl --context addons-516593 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-516593 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-516593 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-516593 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:395: (dbg) Run:  kubectl --context addons-516593 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-516593 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-516593 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-516593 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-516593 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-516593 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-516593 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-516593 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-516593 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-516593 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-516593 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-516593 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:345: "task-pv-pod-restore" [f2c4e7b3-20c2-4c81-af09-1f3e9cdb8079] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:345: "task-pv-pod-restore" [f2c4e7b3-20c2-4c81-af09-1f3e9cdb8079] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.004320503s
addons_test.go:632: (dbg) Run:  kubectl --context addons-516593 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-516593 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-516593 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-arm64 -p addons-516593 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-arm64 -p addons-516593 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.890274543s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-arm64 -p addons-516593 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (56.72s)
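
The long runs of jsonpath polls above are the test helper waiting for the PVC phase and snapshot readiness; by hand, each wait collapses to one command. A sketch, assuming a kubectl recent enough (v1.23+) to support jsonpath conditions in kubectl wait:

    kubectl --context addons-516593 wait pvc/hpvc --for=jsonpath='{.status.phase}'=Bound --timeout=6m
    kubectl --context addons-516593 wait volumesnapshot/new-snapshot-demo --for=jsonpath='{.status.readyToUse}'=true --timeout=6m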

                                                
                                    
TestAddons/parallel/Headlamp (11.24s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-516593 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:345: "headlamp-57fb76fcdb-cqw8q" [e82c1f1a-aa0a-48cf-a00b-bb8f5ad9d159] Pending
helpers_test.go:345: "headlamp-57fb76fcdb-cqw8q" [e82c1f1a-aa0a-48cf-a00b-bb8f5ad9d159] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:345: "headlamp-57fb76fcdb-cqw8q" [e82c1f1a-aa0a-48cf-a00b-bb8f5ad9d159] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.004074913s
addons_test.go:839: (dbg) Run:  out/minikube-linux-arm64 -p addons-516593 addons disable headlamp --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Headlamp (11.24s)

                                                
                                    
TestAddons/parallel/CloudSpanner (6.59s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:345: "cloud-spanner-emulator-769b77f747-5v7l9" [f4ed0f93-1232-4306-bfcd-5b5f265afc87] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003847573s
addons_test.go:870: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-516593
--- PASS: TestAddons/parallel/CloudSpanner (6.59s)

                                                
                                    
TestAddons/parallel/LocalPath (53.04s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-516593 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-516593 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:395: (dbg) Run:  kubectl --context addons-516593 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-516593 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-516593 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-516593 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-516593 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:345: "test-local-path" [05a20570-c9a5-4b6c-9d88-31fce26c1f85] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:345: "test-local-path" [05a20570-c9a5-4b6c-9d88-31fce26c1f85] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:345: "test-local-path" [05a20570-c9a5-4b6c-9d88-31fce26c1f85] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.003903295s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-516593 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-arm64 -p addons-516593 ssh "cat /opt/local-path-provisioner/pvc-23ac053d-dac8-4c44-affc-532b08517fe2_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-516593 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-516593 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-arm64 -p addons-516593 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-linux-arm64 -p addons-516593 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.692010908s)
--- PASS: TestAddons/parallel/LocalPath (53.04s)
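
The local-path check above writes through a PVC and then reads the file back from the provisioner's host path. The read-back step by hand (sketch; the pvc-... directory name is generated per claim and will differ on another run):

    minikube -p addons-516593 ssh "cat /opt/local-path-provisioner/pvc-23ac053d-dac8-4c44-affc-532b08517fe2_default_test-pvc/file1"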

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.58s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:345: "nvidia-device-plugin-daemonset-bb285" [df44e7cc-7587-4732-a752-a37f7c187b90] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004555657s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-516593
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.58s)

                                                
                                    
TestAddons/parallel/Yakd (11.92s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:345: "yakd-dashboard-67d98fc6b-d5s6m" [099b90de-1178-46c3-93bd-bfb2aea0672b] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004209079s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-arm64 -p addons-516593 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-arm64 -p addons-516593 addons disable yakd --alsologtostderr -v=1: (5.910151923s)
--- PASS: TestAddons/parallel/Yakd (11.92s)

                                                
                                    
TestAddons/StoppedEnableDisable (12.33s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-516593
addons_test.go:174: (dbg) Done: out/minikube-linux-arm64 stop -p addons-516593: (12.050289957s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-516593
addons_test.go:182: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-516593
addons_test.go:187: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-516593
--- PASS: TestAddons/StoppedEnableDisable (12.33s)

                                                
                                    
TestCertOptions (36.2s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-610986 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-610986 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (33.49048064s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-610986 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-610986 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-610986 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:176: Cleaning up "cert-options-610986" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-610986
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-610986: (1.966353189s)
--- PASS: TestCertOptions (36.20s)
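
The assertions behind TestCertOptions can be spot-checked directly from the generated certificate and kubeconfig. A sketch, assuming the cert-options-610986 profile were still up (the test deletes it at the end):

    # confirm the extra IPs and names landed in the apiserver cert's SANs
    minikube -p cert-options-610986 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 'Subject Alternative Name'
    # confirm the kubeconfig points at the non-default 8555 port
    kubectl --context cert-options-610986 config view --minify -o jsonpath='{.clusters[0].cluster.server}'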

                                                
                                    
TestCertExpiration (227.34s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-429828 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd
E0831 23:07:26.827660 1166785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/addons-516593/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-429828 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (36.920362059s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-429828 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-429828 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (7.987119506s)
helpers_test.go:176: Cleaning up "cert-expiration-429828" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-429828
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-429828: (2.430831276s)
--- PASS: TestCertExpiration (227.34s)

                                                
                                    
TestForceSystemdFlag (41.2s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-256645 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-256645 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (38.281590515s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-256645 ssh "cat /etc/containerd/config.toml"
helpers_test.go:176: Cleaning up "force-systemd-flag-256645" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-256645
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-256645: (2.49541917s)
--- PASS: TestForceSystemdFlag (41.20s)
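
The cat of /etc/containerd/config.toml above is checking the cgroup driver that --force-systemd writes into containerd's runc options. A hand-run equivalent (sketch; the flag profile is deleted at the end of the test):

    minikube -p force-systemd-flag-256645 ssh "cat /etc/containerd/config.toml" | grep SystemdCgroup
    # expected with --force-systemd: SystemdCgroup = true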

                                                
                                    
TestForceSystemdEnv (45.58s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-627131 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-627131 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (42.728149577s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-627131 ssh "cat /etc/containerd/config.toml"
helpers_test.go:176: Cleaning up "force-systemd-env-627131" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-627131
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-627131: (2.49537617s)
--- PASS: TestForceSystemdEnv (45.58s)

                                                
                                    
TestDockerEnvContainerd (46.78s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-972061 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-972061 --driver=docker  --container-runtime=containerd: (31.280152597s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-972061"
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-4CTabAso3oIX/agent.1185833" SSH_AGENT_PID="1185834" DOCKER_HOST=ssh://docker@127.0.0.1:34254 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-4CTabAso3oIX/agent.1185833" SSH_AGENT_PID="1185834" DOCKER_HOST=ssh://docker@127.0.0.1:34254 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-4CTabAso3oIX/agent.1185833" SSH_AGENT_PID="1185834" DOCKER_HOST=ssh://docker@127.0.0.1:34254 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.116793405s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-4CTabAso3oIX/agent.1185833" SSH_AGENT_PID="1185834" DOCKER_HOST=ssh://docker@127.0.0.1:34254 docker image ls"
helpers_test.go:176: Cleaning up "dockerenv-972061" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-972061
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-972061: (1.94858865s)
--- PASS: TestDockerEnvContainerd (46.78s)
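
Outside the harness, the docker-env output exercised above is normally consumed with eval rather than by exporting SSH_AUTH_SOCK and DOCKER_HOST by hand. A minimal sketch, assuming a running profile named dockerenv-972061:

    eval "$(minikube -p dockerenv-972061 docker-env --ssh-host --ssh-add)"
    docker version        # now talking to the daemon inside the minikube node over SSH
    docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env
    docker image ls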

                                                
                                    
TestErrorSpam/setup (32.13s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-046239 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-046239 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-046239 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-046239 --driver=docker  --container-runtime=containerd: (32.126358039s)
--- PASS: TestErrorSpam/setup (32.13s)

                                                
                                    
TestErrorSpam/start (0.72s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-046239 --log_dir /tmp/nospam-046239 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-046239 --log_dir /tmp/nospam-046239 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-046239 --log_dir /tmp/nospam-046239 start --dry-run
--- PASS: TestErrorSpam/start (0.72s)

                                                
                                    
TestErrorSpam/status (0.99s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-046239 --log_dir /tmp/nospam-046239 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-046239 --log_dir /tmp/nospam-046239 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-046239 --log_dir /tmp/nospam-046239 status
--- PASS: TestErrorSpam/status (0.99s)

                                                
                                    
TestErrorSpam/pause (1.91s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-046239 --log_dir /tmp/nospam-046239 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-046239 --log_dir /tmp/nospam-046239 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-046239 --log_dir /tmp/nospam-046239 pause
--- PASS: TestErrorSpam/pause (1.91s)

                                                
                                    
TestErrorSpam/unpause (1.84s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-046239 --log_dir /tmp/nospam-046239 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-046239 --log_dir /tmp/nospam-046239 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-046239 --log_dir /tmp/nospam-046239 unpause
--- PASS: TestErrorSpam/unpause (1.84s)

                                                
                                    
TestErrorSpam/stop (1.47s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-046239 --log_dir /tmp/nospam-046239 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-046239 --log_dir /tmp/nospam-046239 stop: (1.28786947s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-046239 --log_dir /tmp/nospam-046239 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-046239 --log_dir /tmp/nospam-046239 stop
--- PASS: TestErrorSpam/stop (1.47s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/18943-1161402/.minikube/files/etc/test/nested/copy/1166785/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (51.24s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-arm64 start -p functional-059694 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
functional_test.go:2234: (dbg) Done: out/minikube-linux-arm64 start -p functional-059694 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (51.23555094s)
--- PASS: TestFunctional/serial/StartWithProxy (51.24s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (6.05s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-linux-arm64 start -p functional-059694 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-arm64 start -p functional-059694 --alsologtostderr -v=8: (6.043669682s)
functional_test.go:663: soft start took 6.046301919s for "functional-059694" cluster.
--- PASS: TestFunctional/serial/SoftStart (6.05s)

                                                
                                    
TestFunctional/serial/KubeContext (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-059694 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (4.09s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-059694 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-059694 cache add registry.k8s.io/pause:3.1: (1.4356922s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-059694 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-059694 cache add registry.k8s.io/pause:3.3: (1.428200969s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-059694 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-059694 cache add registry.k8s.io/pause:latest: (1.227705158s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.09s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.36s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-059694 /tmp/TestFunctionalserialCacheCmdcacheadd_local2344551084/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-arm64 -p functional-059694 cache add minikube-local-cache-test:functional-059694
functional_test.go:1094: (dbg) Run:  out/minikube-linux-arm64 -p functional-059694 cache delete minikube-local-cache-test:functional-059694
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-059694
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.36s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-arm64 -p functional-059694 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (2.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-arm64 -p functional-059694 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-arm64 -p functional-059694 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-059694 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (278.867883ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-arm64 -p functional-059694 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-arm64 -p functional-059694 cache reload: (1.16063085s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-arm64 -p functional-059694 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.06s)
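
Condensed, the cache_reload sequence above removes an image from the node's runtime and restores it from minikube's on-disk cache; a hand-run sketch against the same profile:

    minikube -p functional-059694 ssh sudo crictl rmi registry.k8s.io/pause:latest
    minikube -p functional-059694 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # fails: image no longer present
    minikube -p functional-059694 cache reload
    minikube -p functional-059694 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds after reload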

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-arm64 -p functional-059694 kubectl -- --context functional-059694 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-059694 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

                                                
                                    
TestFunctional/serial/ExtraConfig (44.84s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-arm64 start -p functional-059694 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-arm64 start -p functional-059694 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (44.844428264s)
functional_test.go:761: restart took 44.844539361s for "functional-059694" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (44.84s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-059694 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)
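
The phase/status pairs above are read from the JSON pod list fetched one line earlier. A sketch of that check in Go, decoding only the fields the check needs (the trimmed structs below are illustrative, not the test's real types):

	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	// podList covers only what the health check reads from the pod list JSON.
	type podList struct {
		Items []struct {
			Metadata struct {
				Labels map[string]string `json:"labels"`
			} `json:"metadata"`
			Status struct {
				Phase      string `json:"phase"`
				Conditions []struct {
					Type   string `json:"type"`
					Status string `json:"status"`
				} `json:"conditions"`
			} `json:"status"`
		} `json:"items"`
	}

	func main() {
		out, err := exec.Command("kubectl", "--context", "functional-059694",
			"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o=json").Output()
		if err != nil {
			log.Fatal(err)
		}
		var pl podList
		if err := json.Unmarshal(out, &pl); err != nil {
			log.Fatal(err)
		}
		for _, p := range pl.Items {
			ready := "False"
			for _, c := range p.Status.Conditions {
				if c.Type == "Ready" {
					ready = c.Status
				}
			}
			// Control-plane pods carry a "component" label (etcd, kube-apiserver, ...).
			fmt.Printf("%s phase: %s, ready: %s\n",
				p.Metadata.Labels["component"], p.Status.Phase, ready)
		}
	}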

TestFunctional/serial/LogsCmd (1.67s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-arm64 -p functional-059694 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-arm64 -p functional-059694 logs: (1.670108314s)
--- PASS: TestFunctional/serial/LogsCmd (1.67s)

TestFunctional/serial/LogsFileCmd (1.88s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-arm64 -p functional-059694 logs --file /tmp/TestFunctionalserialLogsFileCmd1877919770/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-arm64 -p functional-059694 logs --file /tmp/TestFunctionalserialLogsFileCmd1877919770/001/logs.txt: (1.873663398s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.88s)

TestFunctional/serial/InvalidService (5.07s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-059694 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-059694
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-059694: exit status 115 (649.770889ms)
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31738 |
	|-----------|-------------|-------------|---------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-059694 delete -f testdata/invalidsvc.yaml
functional_test.go:2327: (dbg) Done: kubectl --context functional-059694 delete -f testdata/invalidsvc.yaml: (1.162140202s)
--- PASS: TestFunctional/serial/InvalidService (5.07s)
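
The "(dbg) Non-zero exit ... exit status 115" lines come from unwrapping the *exec.ExitError returned by os/exec; a minimal sketch of that mechanism, using the command from this test:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-arm64", "service", "invalid-svc", "-p", "functional-059694")
		err := cmd.Run()
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			// Prints 115 in this run (the SVC_UNREACHABLE exit).
			fmt.Println("exit code:", ee.ExitCode())
		}
	}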

TestFunctional/parallel/ConfigCmd (0.44s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-059694 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-059694 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-059694 config get cpus: exit status 14 (76.393864ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-059694 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-059694 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-059694 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-059694 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-059694 config get cpus: exit status 14 (68.478315ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.44s)
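
Per the log above, `config get` on an unset key exits 14 with "specified key could not be found in config". A sketch of the same set/get/unset round trip driven from Go:

	package main

	import (
		"errors"
		"fmt"
		"log"
		"os/exec"
	)

	// run invokes minikube with this run's profile and returns output plus exit code.
	func run(args ...string) (string, int, error) {
		full := append([]string{"-p", "functional-059694"}, args...)
		out, err := exec.Command("out/minikube-linux-arm64", full...).CombinedOutput()
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			return string(out), ee.ExitCode(), nil
		}
		return string(out), 0, err
	}

	func main() {
		if _, code, _ := run("config", "get", "cpus"); code != 14 {
			log.Fatalf("expected exit 14 for an unset key, got %d", code)
		}
		if _, _, err := run("config", "set", "cpus", "2"); err != nil {
			log.Fatal(err)
		}
		out, _, _ := run("config", "get", "cpus")
		fmt.Printf("cpus=%s", out)
		run("config", "unset", "cpus") // back to the unset state
	}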

TestFunctional/parallel/DashboardCmd (7.95s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-059694 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-059694 --alsologtostderr -v=1] ...
helpers_test.go:509: unable to kill pid 1200235: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (7.95s)

TestFunctional/parallel/DryRun (0.53s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-arm64 start -p functional-059694 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-059694 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (263.203935ms)
-- stdout --
	* [functional-059694] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18943
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18943-1161402/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-1161402/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0831 22:34:07.786411 1199929 out.go:345] Setting OutFile to fd 1 ...
	I0831 22:34:07.786617 1199929 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:34:07.786631 1199929 out.go:358] Setting ErrFile to fd 2...
	I0831 22:34:07.786638 1199929 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:34:07.786918 1199929 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-1161402/.minikube/bin
	I0831 22:34:07.787338 1199929 out.go:352] Setting JSON to false
	I0831 22:34:07.788403 1199929 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":22597,"bootTime":1725121051,"procs":211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0831 22:34:07.788478 1199929 start.go:139] virtualization:  
	I0831 22:34:07.790711 1199929 out.go:177] * [functional-059694] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0831 22:34:07.793050 1199929 out.go:177]   - MINIKUBE_LOCATION=18943
	I0831 22:34:07.793143 1199929 notify.go:220] Checking for updates...
	I0831 22:34:07.796685 1199929 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0831 22:34:07.798699 1199929 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18943-1161402/kubeconfig
	I0831 22:34:07.800837 1199929 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-1161402/.minikube
	I0831 22:34:07.802501 1199929 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0831 22:34:07.804026 1199929 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0831 22:34:07.806240 1199929 config.go:182] Loaded profile config "functional-059694": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0831 22:34:07.806761 1199929 driver.go:392] Setting default libvirt URI to qemu:///system
	I0831 22:34:07.847993 1199929 docker.go:123] docker version: linux-27.2.0:Docker Engine - Community
	I0831 22:34:07.848251 1199929 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0831 22:34:07.968793 1199929 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-08-31 22:34:07.933583329 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0831 22:34:07.968900 1199929 docker.go:307] overlay module found
	I0831 22:34:07.970607 1199929 out.go:177] * Using the docker driver based on existing profile
	I0831 22:34:07.972338 1199929 start.go:297] selected driver: docker
	I0831 22:34:07.972361 1199929 start.go:901] validating driver "docker" against &{Name:functional-059694 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-059694 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 22:34:07.972471 1199929 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0831 22:34:07.974620 1199929 out.go:201] 
	W0831 22:34:07.976290 1199929 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0831 22:34:07.977848 1199929 out.go:201] 
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-arm64 start -p functional-059694 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.53s)
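
The dry run fails fast on RSRC_INSUFFICIENT_REQ_MEMORY: 250MiB requested against a 1800MB usable minimum. Illustrative only (this is not minikube's actual code), the shape of such a validation is a one-line comparison:

	package main

	import "fmt"

	// minUsableMB mirrors the minimum reported in this run; illustrative constant.
	const minUsableMB = 1800

	func validateMemory(requestedMB int) error {
		if requestedMB < minUsableMB {
			return fmt.Errorf("requested memory allocation %dMiB is less than the usable minimum of %dMB",
				requestedMB, minUsableMB)
		}
		return nil
	}

	func main() {
		fmt.Println(validateMemory(250)) // mirrors the --memory 250MB dry run above
	}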

TestFunctional/parallel/InternationalLanguage (0.28s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-arm64 start -p functional-059694 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-059694 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (280.508077ms)
-- stdout --
	* [functional-059694] minikube v1.33.1 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18943
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18943-1161402/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-1161402/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0831 22:34:07.527198 1199833 out.go:345] Setting OutFile to fd 1 ...
	I0831 22:34:07.529001 1199833 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:34:07.529392 1199833 out.go:358] Setting ErrFile to fd 2...
	I0831 22:34:07.529449 1199833 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:34:07.529976 1199833 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-1161402/.minikube/bin
	I0831 22:34:07.531740 1199833 out.go:352] Setting JSON to false
	I0831 22:34:07.533160 1199833 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":22596,"bootTime":1725121051,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0831 22:34:07.533296 1199833 start.go:139] virtualization:  
	I0831 22:34:07.536670 1199833 out.go:177] * [functional-059694] minikube v1.33.1 sur Ubuntu 20.04 (arm64)
	I0831 22:34:07.538731 1199833 notify.go:220] Checking for updates...
	I0831 22:34:07.542359 1199833 out.go:177]   - MINIKUBE_LOCATION=18943
	I0831 22:34:07.546014 1199833 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0831 22:34:07.547880 1199833 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18943-1161402/kubeconfig
	I0831 22:34:07.549673 1199833 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-1161402/.minikube
	I0831 22:34:07.551732 1199833 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0831 22:34:07.553783 1199833 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0831 22:34:07.556337 1199833 config.go:182] Loaded profile config "functional-059694": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0831 22:34:07.557096 1199833 driver.go:392] Setting default libvirt URI to qemu:///system
	I0831 22:34:07.613644 1199833 docker.go:123] docker version: linux-27.2.0:Docker Engine - Community
	I0831 22:34:07.613804 1199833 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0831 22:34:07.702045 1199833 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-08-31 22:34:07.690540142 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0831 22:34:07.702160 1199833 docker.go:307] overlay module found
	I0831 22:34:07.706403 1199833 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0831 22:34:07.708199 1199833 start.go:297] selected driver: docker
	I0831 22:34:07.708215 1199833 start.go:901] validating driver "docker" against &{Name:functional-059694 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-059694 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 22:34:07.708331 1199833 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0831 22:34:07.711270 1199833 out.go:201] 
	W0831 22:34:07.713566 1199833 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0831 22:34:07.715392 1199833 out.go:201] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.28s)
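
The French output above exercises minikube's localization; assuming (not confirmed by this log alone) that the locale is picked up from the standard LC_ALL/LANG environment variables, the test's invocation can be approximated as:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-arm64", "start", "-p", "functional-059694",
			"--dry-run", "--memory", "250MB", "--driver=docker", "--container-runtime=containerd")
		// Assumption: a French locale in the environment selects the translated messages.
		cmd.Env = append(os.Environ(), "LC_ALL=fr_FR.UTF-8")
		out, _ := cmd.CombinedOutput() // exits 23, as in the run above
		fmt.Printf("%s", out)
	}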

TestFunctional/parallel/StatusCmd (1.36s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-arm64 -p functional-059694 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-arm64 -p functional-059694 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-arm64 -p functional-059694 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.36s)
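
The -f argument to `status` is a Go text/template rendered against the status result (the format string above spells "kublet"; it is preserved verbatim below). A self-contained sketch with an illustrative struct, not minikube's real type:

	package main

	import (
		"os"
		"text/template"
	)

	// Status is a stand-in for the structure the template is rendered against.
	type Status struct{ Host, Kubelet, APIServer, Kubeconfig string }

	func main() {
		// Format string copied from the command above, including the "kublet" key.
		const format = "host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}\n"
		t := template.Must(template.New("status").Parse(format))
		t.Execute(os.Stdout, Status{"Running", "Running", "Running", "Configured"})
	}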

TestFunctional/parallel/ServiceCmdConnect (8.69s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-059694 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-059694 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:345: "hello-node-connect-65d86f57f4-46ktg" [2eea8fce-6904-4ef5-bcb5-fddd14f693ef] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:345: "hello-node-connect-65d86f57f4-46ktg" [2eea8fce-6904-4ef5-bcb5-fddd14f693ef] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.004004889s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-arm64 -p functional-059694 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:31951
functional_test.go:1675: http://192.168.49.2:31951: success! body:
Hostname: hello-node-connect-65d86f57f4-46ktg
Pod Information:
	-no pod information available-
Server values:
	server_version=nginx: 1.13.3 - lua: 10008
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/
Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31951
	user-agent=Go-http-client/1.1
Request Body:
	-no body in request-
--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.69s)
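
The "success! body:" output is the result of an HTTP GET against the NodePort URL printed by `service ... --url`. The equivalent fetch, standalone, using the endpoint reported in this run:

	package main

	import (
		"fmt"
		"io"
		"log"
		"net/http"
	)

	func main() {
		url := "http://192.168.49.2:31951" // endpoint reported by this run
		resp, err := http.Get(url)
		if err != nil {
			log.Fatal(err)
		}
		defer resp.Body.Close()
		body, err := io.ReadAll(resp.Body)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("status %d\n%s", resp.StatusCode, body)
	}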

TestFunctional/parallel/AddonsCmd (0.17s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-arm64 -p functional-059694 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-arm64 -p functional-059694 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.17s)

TestFunctional/parallel/PersistentVolumeClaim (26.72s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:345: "storage-provisioner" [1663925f-61ba-4a48-bb89-46cf41e49a1e] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004412792s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-059694 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-059694 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-059694 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-059694 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:345: "sp-pod" [294a573a-3925-42f4-84ea-ef9e9c5f007d] Pending
helpers_test.go:345: "sp-pod" [294a573a-3925-42f4-84ea-ef9e9c5f007d] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:345: "sp-pod" [294a573a-3925-42f4-84ea-ef9e9c5f007d] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.003566071s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-059694 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-059694 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-059694 delete -f testdata/storage-provisioner/pod.yaml: (1.618249613s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-059694 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:345: "sp-pod" [c3c9bbb9-e2ca-48c4-aa49-af89616552e4] Pending
helpers_test.go:345: "sp-pod" [c3c9bbb9-e2ca-48c4-aa49-af89616552e4] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:345: "sp-pod" [c3c9bbb9-e2ca-48c4-aa49-af89616552e4] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.005422929s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-059694 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (26.72s)

TestFunctional/parallel/SSHCmd (0.79s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-arm64 -p functional-059694 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-arm64 -p functional-059694 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.79s)

TestFunctional/parallel/CpCmd (2s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:557: (dbg) Run:  out/minikube-linux-arm64 -p functional-059694 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p functional-059694 ssh -n functional-059694 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-arm64 -p functional-059694 cp functional-059694:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd281773733/001/cp-test.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p functional-059694 ssh -n functional-059694 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-arm64 -p functional-059694 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p functional-059694 ssh -n functional-059694 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.00s)

TestFunctional/parallel/FileSync (0.36s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/1166785/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-arm64 -p functional-059694 ssh "sudo cat /etc/test/nested/copy/1166785/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.36s)

TestFunctional/parallel/CertSync (2.09s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/1166785.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-059694 ssh "sudo cat /etc/ssl/certs/1166785.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/1166785.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-059694 ssh "sudo cat /usr/share/ca-certificates/1166785.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-059694 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/11667852.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-059694 ssh "sudo cat /etc/ssl/certs/11667852.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/11667852.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-059694 ssh "sudo cat /usr/share/ca-certificates/11667852.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-059694 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.09s)

TestFunctional/parallel/NodeLabels (0.09s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-059694 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)
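
The --template argument above uses text/template's range action to walk the node's label map. The same construct, runnable standalone against a stand-in map:

	package main

	import (
		"os"
		"text/template"
	)

	func main() {
		// Template copied from the command above; iterates map keys.
		const tpl = "{{range $k, $v := .}}{{$k}} {{end}}\n"
		labels := map[string]string{ // stand-in for (index .items 0).metadata.labels
			"kubernetes.io/arch": "arm64",
			"kubernetes.io/os":   "linux",
		}
		template.Must(template.New("labels").Parse(tpl)).Execute(os.Stdout, labels)
	}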

TestFunctional/parallel/NonActiveRuntimeDisabled (0.66s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-059694 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-059694 ssh "sudo systemctl is-active docker": exit status 1 (324.137818ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-059694 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-059694 ssh "sudo systemctl is-active crio": exit status 1 (338.911188ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.66s)
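
Both probes above rely on `systemctl is-active` printing the unit state and exiting non-zero (3 here) for an inactive unit, so the check asserts the output and the exit status together. A sketch of that logic:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
		"strings"
	)

	// runtimeDisabled reports whether the given systemd unit is inactive in the node.
	func runtimeDisabled(unit string) (bool, error) {
		out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-059694",
			"ssh", "sudo systemctl is-active "+unit).CombinedOutput()
		state := strings.TrimSpace(string(out))
		if err == nil {
			return false, fmt.Errorf("%s unexpectedly active: %s", unit, state)
		}
		var ee *exec.ExitError
		if !errors.As(err, &ee) {
			return false, err // the ssh invocation itself failed
		}
		return state == "inactive", nil // exit 3 plus "inactive" means disabled
	}

	func main() {
		for _, u := range []string{"docker", "crio"} {
			ok, err := runtimeDisabled(u)
			fmt.Println(u, ok, err)
		}
	}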

TestFunctional/parallel/License (0.36s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.36s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.62s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-059694 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-059694 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-059694 tunnel --alsologtostderr] ...
helpers_test.go:491: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-059694 tunnel --alsologtostderr] ...
helpers_test.go:509: unable to kill pid 1197515: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.62s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-059694 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.33s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-059694 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:345: "nginx-svc" [93e9899c-c5d4-4989-be87-693e9f59251f] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:345: "nginx-svc" [93e9899c-c5d4-4989-be87-693e9f59251f] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.003608802s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.33s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.14s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-059694 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.14s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.102.196.194 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-059694 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/DeployApp (7.27s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-059694 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-059694 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:345: "hello-node-64b4f8f9ff-55cwd" [fcd663dd-8b2b-4b30-8284-e28f5da35646] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:345: "hello-node-64b4f8f9ff-55cwd" [fcd663dd-8b2b-4b30-8284-e28f5da35646] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.00381371s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.27s)

TestFunctional/parallel/ServiceCmd/List (0.51s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-arm64 -p functional-059694 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.51s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.58s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-arm64 -p functional-059694 service list -o json
functional_test.go:1494: Took "575.625758ms" to run "out/minikube-linux-arm64 -p functional-059694 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.58s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.45s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.45s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.46s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-arm64 -p functional-059694 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:31617
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.46s)

TestFunctional/parallel/ProfileCmd/profile_list (0.57s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1315: Took "488.375351ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1329: Took "81.428562ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.57s)

TestFunctional/parallel/ServiceCmd/Format (0.47s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-arm64 -p functional-059694 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.47s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.47s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1366: Took "397.679302ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1379: Took "68.94288ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.47s)

TestFunctional/parallel/ServiceCmd/URL (0.53s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-arm64 -p functional-059694 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:31617
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.53s)

TestFunctional/parallel/MountCmd/any-port (8.54s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-059694 /tmp/TestFunctionalparallelMountCmdany-port3430159751/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1725143645766834282" to /tmp/TestFunctionalparallelMountCmdany-port3430159751/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1725143645766834282" to /tmp/TestFunctionalparallelMountCmdany-port3430159751/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1725143645766834282" to /tmp/TestFunctionalparallelMountCmdany-port3430159751/001/test-1725143645766834282
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-059694 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-059694 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (435.618488ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-059694 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-059694 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug 31 22:34 created-by-test
-rw-r--r-- 1 docker docker 24 Aug 31 22:34 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug 31 22:34 test-1725143645766834282
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-059694 ssh cat /mount-9p/test-1725143645766834282
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-059694 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:345: "busybox-mount" [d9345e28-e15e-43dd-b3e9-e9df6e10d030] Pending
helpers_test.go:345: "busybox-mount" [d9345e28-e15e-43dd-b3e9-e9df6e10d030] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:345: "busybox-mount" [d9345e28-e15e-43dd-b3e9-e9df6e10d030] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:345: "busybox-mount" [d9345e28-e15e-43dd-b3e9-e9df6e10d030] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.005138408s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-059694 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-059694 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-059694 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-059694 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-059694 /tmp/TestFunctionalparallelMountCmdany-port3430159751/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.54s)
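
For reference, the check this test automates can be reproduced by hand against a running profile. A minimal sketch, assuming the profile from this run and a placeholder host directory /tmp/src (the test's --alsologtostderr flags are dropped):

  # Start a 9p mount (host:guest) in the background, then verify it from the guest.
  out/minikube-linux-arm64 mount -p functional-059694 /tmp/src:/mount-9p &
  # The mount takes a moment to appear, which is why the test retries its first findmnt.
  out/minikube-linux-arm64 -p functional-059694 ssh "findmnt -T /mount-9p | grep 9p"
  out/minikube-linux-arm64 -p functional-059694 ssh -- ls -la /mount-9p   # host files show up here
  out/minikube-linux-arm64 -p functional-059694 ssh "sudo umount -f /mount-9p"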

TestFunctional/parallel/MountCmd/specific-port (1.35s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-059694 /tmp/TestFunctionalparallelMountCmdspecific-port4197885631/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-059694 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-059694 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-059694 /tmp/TestFunctionalparallelMountCmdspecific-port4197885631/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-059694 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-059694 ssh "sudo umount -f /mount-9p": exit status 1 (299.548689ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-059694 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-059694 /tmp/TestFunctionalparallelMountCmdspecific-port4197885631/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.35s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.01s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-059694 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1973497981/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-059694 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1973497981/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-059694 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1973497981/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-059694 ssh "findmnt -T" /mount1
2024/08/31 22:34:15 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-059694 ssh "findmnt -T" /mount1: exit status 1 (598.255732ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-059694 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-059694 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-059694 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-059694 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-059694 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1973497981/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:491: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-059694 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1973497981/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:491: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-059694 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1973497981/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:491: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.01s)
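
Note the cleanup path exercised here: rather than unmounting each target, one command kills every mount process for the profile, after which the test only confirms the parent processes are gone. A sketch of the same teardown by hand, reusing this run's mount points:

  # Kill all minikube mount processes for the profile in one shot.
  out/minikube-linux-arm64 mount -p functional-059694 --kill=true
  # Each guest path should now fail the 9p check.
  out/minikube-linux-arm64 -p functional-059694 ssh "findmnt -T /mount1 | grep 9p" || echo "/mount1 unmounted"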

TestFunctional/parallel/Version/short (0.08s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-arm64 -p functional-059694 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

TestFunctional/parallel/Version/components (1.24s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-arm64 -p functional-059694 version -o=json --components
functional_test.go:2270: (dbg) Done: out/minikube-linux-arm64 -p functional-059694 version -o=json --components: (1.243971223s)
--- PASS: TestFunctional/parallel/Version/components (1.24s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-059694 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-059694 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.0
registry.k8s.io/kube-proxy:v1.31.0
registry.k8s.io/kube-controller-manager:v1.31.0
registry.k8s.io/kube-apiserver:v1.31.0
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-059694
docker.io/kindest/kindnetd:v20240813-c6f155d6
docker.io/kindest/kindnetd:v20240730-75a5af0c
docker.io/kicbase/echo-server:functional-059694
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-059694 image ls --format short --alsologtostderr:
I0831 22:34:24.708364 1203198 out.go:345] Setting OutFile to fd 1 ...
I0831 22:34:24.708554 1203198 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0831 22:34:24.708582 1203198 out.go:358] Setting ErrFile to fd 2...
I0831 22:34:24.708602 1203198 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0831 22:34:24.709086 1203198 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-1161402/.minikube/bin
I0831 22:34:24.710335 1203198 config.go:182] Loaded profile config "functional-059694": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0831 22:34:24.710524 1203198 config.go:182] Loaded profile config "functional-059694": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0831 22:34:24.711096 1203198 cli_runner.go:164] Run: docker container inspect functional-059694 --format={{.State.Status}}
I0831 22:34:24.733420 1203198 ssh_runner.go:195] Run: systemctl --version
I0831 22:34:24.733473 1203198 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-059694
I0831 22:34:24.755215 1203198 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34264 SSHKeyPath:/home/jenkins/minikube-integration/18943-1161402/.minikube/machines/functional-059694/id_rsa Username:docker}
I0831 22:34:24.861215 1203198 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-059694 image ls --format table --alsologtostderr
E0831 22:34:25.049734 1166785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/addons-516593/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-059694 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/etcd                        | 3.5.15-0           | sha256:27e383 | 66.5MB |
| registry.k8s.io/kube-controller-manager     | v1.31.0            | sha256:fcb068 | 23.9MB |
| docker.io/library/minikube-local-cache-test | functional-059694  | sha256:2a138a | 991B   |
| registry.k8s.io/pause                       | 3.10               | sha256:afb617 | 268kB  |
| registry.k8s.io/pause                       | 3.3                | sha256:3d1873 | 249kB  |
| docker.io/kicbase/echo-server               | functional-059694  | sha256:ce2d2c | 2.17MB |
| docker.io/kindest/kindnetd                  | v20240730-75a5af0c | sha256:d5e283 | 33.3MB |
| docker.io/library/nginx                     | alpine             | sha256:70594c | 19.6MB |
| docker.io/library/nginx                     | latest             | sha256:a9dfdb | 67.7MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:ba04bb | 8.03MB |
| registry.k8s.io/coredns/coredns             | v1.11.1            | sha256:2437cf | 16.5MB |
| registry.k8s.io/pause                       | 3.1                | sha256:8057e0 | 262kB  |
| registry.k8s.io/pause                       | latest             | sha256:8cb209 | 71.3kB |
| docker.io/kindest/kindnetd                  | v20240813-c6f155d6 | sha256:6a23fa | 33.3MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:1611cd | 1.94MB |
| registry.k8s.io/echoserver-arm              | 1.8                | sha256:72565b | 45.3MB |
| registry.k8s.io/kube-apiserver              | v1.31.0            | sha256:cd0f0a | 25.7MB |
| registry.k8s.io/kube-proxy                  | v1.31.0            | sha256:71d55d | 26.8MB |
| registry.k8s.io/kube-scheduler              | v1.31.0            | sha256:fbbbd4 | 18.5MB |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-059694 image ls --format table --alsologtostderr:
I0831 22:34:24.998602 1203267 out.go:345] Setting OutFile to fd 1 ...
I0831 22:34:24.998777 1203267 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0831 22:34:24.998789 1203267 out.go:358] Setting ErrFile to fd 2...
I0831 22:34:24.998795 1203267 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0831 22:34:24.999083 1203267 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-1161402/.minikube/bin
I0831 22:34:25.002921 1203267 config.go:182] Loaded profile config "functional-059694": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0831 22:34:25.003052 1203267 config.go:182] Loaded profile config "functional-059694": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0831 22:34:25.003563 1203267 cli_runner.go:164] Run: docker container inspect functional-059694 --format={{.State.Status}}
I0831 22:34:25.053089 1203267 ssh_runner.go:195] Run: systemctl --version
I0831 22:34:25.053147 1203267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-059694
I0831 22:34:25.080713 1203267 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34264 SSHKeyPath:/home/jenkins/minikube-integration/18943-1161402/.minikube/machines/functional-059694/id_rsa Username:docker}
I0831 22:34:25.176994 1203267 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.33s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-059694 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-059694 image ls --format json --alsologtostderr:
[{"id":"sha256:cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388","repoDigests":["registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.0"],"size":"25688321"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"},{"id":"sha256:d5e283bc63d431d0446af8b48a1618696def3b777347a97b8b3553d2c989c806","repoDigests":["docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3"],"repoTags":["docker.io/kindest/kindnetd:v20240730-75a5af0c"],"size":"33305789"},{"id":"sha256:a9dfdba8b719078c5705fdecd6f8315765cc79e473111aa9451551ddc340b2bc","repoDigests":["docker.io/library/nginx@sha256:447a8665cc1dab95b1ca778e162215839ccbb9189104c79d7ec3a81e14577add"],"repoTags":["docker.io/library/nginx:latest"],"size":"67690150"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0
b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:71d55d66fd4eec8986225089a135fadd96bc6624d987096808772ce1e1924d89","repoDigests":["registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.0"],"size":"26752334"},{"id":"sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"74084559"},{"id":"sha256:2a138a197f6abfee1db3f505702e19475c70221ace76630dbd6e35a31dd4b2f0","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-059694"],"size":"991"},{"id":"sha256:70594c812316a9bc20dd5d679982c6322dc7cf0128687ae9f849d0207783e753","repoDigests":["
docker.io/library/nginx@sha256:c04c18adc2a407740a397c8407c011fc6c90026a9b65cceddef7ae5484360158"],"repoTags":["docker.io/library/nginx:alpine"],"size":"19627164"},{"id":"sha256:fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.0"],"size":"23947353"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-059694"],"size":"2173567"},{"id":"sha256:6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51","
repoDigests":["docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"],"repoTags":["docker.io/kindest/kindnetd:v20240813-c6f155d6"],"size":"33309097"},{"id":"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"16482581"},{"id":"sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"45324675"},{"id":"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":["registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"66535646"},{"id":"sha256:fbbbd428abb4d
ae52ab3018797d00d5840a739f0cc5697b662791831a60b0adb","repoDigests":["registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.0"],"size":"18505843"},{"id":"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":["registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"267933"},{"id":"sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"18306114"},{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-059694 image ls --format json --alsologtostderr:
I0831 22:34:24.987093 1203263 out.go:345] Setting OutFile to fd 1 ...
I0831 22:34:24.987279 1203263 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0831 22:34:24.987291 1203263 out.go:358] Setting ErrFile to fd 2...
I0831 22:34:24.987297 1203263 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0831 22:34:24.987551 1203263 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-1161402/.minikube/bin
I0831 22:34:24.988244 1203263 config.go:182] Loaded profile config "functional-059694": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0831 22:34:24.988415 1203263 config.go:182] Loaded profile config "functional-059694": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0831 22:34:24.989057 1203263 cli_runner.go:164] Run: docker container inspect functional-059694 --format={{.State.Status}}
I0831 22:34:25.012805 1203263 ssh_runner.go:195] Run: systemctl --version
I0831 22:34:25.012870 1203263 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-059694
I0831 22:34:25.059345 1203263 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34264 SSHKeyPath:/home/jenkins/minikube-integration/18943-1161402/.minikube/machines/functional-059694/id_rsa Username:docker}
I0831 22:34:25.153292 1203263 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.30s)
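
Of the list formats, JSON is the one intended for scripting. A minimal sketch of consuming it, assuming jq is available on the host (jq is not part of the test):

  # Every tag known to the profile's container runtime.
  out/minikube-linux-arm64 -p functional-059694 image ls --format json | jq -r '.[] | .repoTags[]?'
  # Size plus first tag, falling back to the bare ID for untagged images.
  out/minikube-linux-arm64 -p functional-059694 image ls --format json | jq -r '.[] | "\(.size)\t\(.repoTags[0] // .id)"'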

TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-059694 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-059694 image ls --format yaml --alsologtostderr:
- id: sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-059694
size: "2173567"
- id: sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "18306114"
- id: sha256:a9dfdba8b719078c5705fdecd6f8315765cc79e473111aa9451551ddc340b2bc
repoDigests:
- docker.io/library/nginx@sha256:447a8665cc1dab95b1ca778e162215839ccbb9189104c79d7ec3a81e14577add
repoTags:
- docker.io/library/nginx:latest
size: "67690150"
- id: sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests:
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "66535646"
- id: sha256:fbbbd428abb4dae52ab3018797d00d5840a739f0cc5697b662791831a60b0adb
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.0
size: "18505843"
- id: sha256:d5e283bc63d431d0446af8b48a1618696def3b777347a97b8b3553d2c989c806
repoDigests:
- docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3
repoTags:
- docker.io/kindest/kindnetd:v20240730-75a5af0c
size: "33305789"
- id: sha256:6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51
repoDigests:
- docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166
repoTags:
- docker.io/kindest/kindnetd:v20240813-c6f155d6
size: "33309097"
- id: sha256:2a138a197f6abfee1db3f505702e19475c70221ace76630dbd6e35a31dd4b2f0
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-059694
size: "991"
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests:
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "267933"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "74084559"
- id: sha256:70594c812316a9bc20dd5d679982c6322dc7cf0128687ae9f849d0207783e753
repoDigests:
- docker.io/library/nginx@sha256:c04c18adc2a407740a397c8407c011fc6c90026a9b65cceddef7ae5484360158
repoTags:
- docker.io/library/nginx:alpine
size: "19627164"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "45324675"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "16482581"
- id: sha256:cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.0
size: "25688321"
- id: sha256:fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.0
size: "23947353"
- id: sha256:71d55d66fd4eec8986225089a135fadd96bc6624d987096808772ce1e1924d89
repoDigests:
- registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe
repoTags:
- registry.k8s.io/kube-proxy:v1.31.0
size: "26752334"
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-059694 image ls --format yaml --alsologtostderr:
I0831 22:34:24.722241 1203199 out.go:345] Setting OutFile to fd 1 ...
I0831 22:34:24.722464 1203199 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0831 22:34:24.722474 1203199 out.go:358] Setting ErrFile to fd 2...
I0831 22:34:24.722479 1203199 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0831 22:34:24.722735 1203199 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-1161402/.minikube/bin
I0831 22:34:24.723402 1203199 config.go:182] Loaded profile config "functional-059694": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0831 22:34:24.723536 1203199 config.go:182] Loaded profile config "functional-059694": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0831 22:34:24.724047 1203199 cli_runner.go:164] Run: docker container inspect functional-059694 --format={{.State.Status}}
I0831 22:34:24.742161 1203199 ssh_runner.go:195] Run: systemctl --version
I0831 22:34:24.742215 1203199 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-059694
I0831 22:34:24.770524 1203199 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34264 SSHKeyPath:/home/jenkins/minikube-integration/18943-1161402/.minikube/machines/functional-059694/id_rsa Username:docker}
I0831 22:34:24.865679 1203199 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.48s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p functional-059694 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-059694 ssh pgrep buildkitd: exit status 1 (284.764234ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-arm64 -p functional-059694 image build -t localhost/my-image:functional-059694 testdata/build --alsologtostderr
E0831 22:34:26.331929 1166785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/addons-516593/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:315: (dbg) Done: out/minikube-linux-arm64 -p functional-059694 image build -t localhost/my-image:functional-059694 testdata/build --alsologtostderr: (2.971891355s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-arm64 -p functional-059694 image build -t localhost/my-image:functional-059694 testdata/build --alsologtostderr:
I0831 22:34:25.544738 1203385 out.go:345] Setting OutFile to fd 1 ...
I0831 22:34:25.545440 1203385 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0831 22:34:25.545473 1203385 out.go:358] Setting ErrFile to fd 2...
I0831 22:34:25.545492 1203385 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0831 22:34:25.545761 1203385 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-1161402/.minikube/bin
I0831 22:34:25.546419 1203385 config.go:182] Loaded profile config "functional-059694": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0831 22:34:25.547073 1203385 config.go:182] Loaded profile config "functional-059694": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0831 22:34:25.547601 1203385 cli_runner.go:164] Run: docker container inspect functional-059694 --format={{.State.Status}}
I0831 22:34:25.563506 1203385 ssh_runner.go:195] Run: systemctl --version
I0831 22:34:25.563567 1203385 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-059694
I0831 22:34:25.583798 1203385 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34264 SSHKeyPath:/home/jenkins/minikube-integration/18943-1161402/.minikube/machines/functional-059694/id_rsa Username:docker}
I0831 22:34:25.672851 1203385 build_images.go:161] Building image from path: /tmp/build.3576417556.tar
I0831 22:34:25.672968 1203385 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0831 22:34:25.681475 1203385 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3576417556.tar
I0831 22:34:25.684833 1203385 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3576417556.tar: stat -c "%s %y" /var/lib/minikube/build/build.3576417556.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3576417556.tar': No such file or directory
I0831 22:34:25.684859 1203385 ssh_runner.go:362] scp /tmp/build.3576417556.tar --> /var/lib/minikube/build/build.3576417556.tar (3072 bytes)
I0831 22:34:25.708289 1203385 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3576417556
I0831 22:34:25.717548 1203385 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3576417556 -xf /var/lib/minikube/build/build.3576417556.tar
I0831 22:34:25.726365 1203385 containerd.go:394] Building image: /var/lib/minikube/build/build.3576417556
I0831 22:34:25.726490 1203385 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.3576417556 --local dockerfile=/var/lib/minikube/build/build.3576417556 --output type=image,name=localhost/my-image:functional-059694
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.2s
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 DONE 0.1s
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.6s
#6 [2/3] RUN true
#6 DONE 0.5s
#7 [3/3] ADD content.txt /
#7 DONE 0.0s
#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:84042d8f6ac86940a9d4ad0c5e1ea65b4d7a502665be6df4ac843a2d789b81c4
#8 exporting manifest sha256:84042d8f6ac86940a9d4ad0c5e1ea65b4d7a502665be6df4ac843a2d789b81c4 0.0s done
#8 exporting config sha256:f669ef88cc760f40257129b8a40973ee583995ad255c0eb1028cd8a532bfce20 0.0s done
#8 naming to localhost/my-image:functional-059694 done
#8 DONE 0.1s
I0831 22:34:28.441799 1203385 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.3576417556 --local dockerfile=/var/lib/minikube/build/build.3576417556 --output type=image,name=localhost/my-image:functional-059694: (2.715258638s)
I0831 22:34:28.441879 1203385 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3576417556
I0831 22:34:28.452229 1203385 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3576417556.tar
I0831 22:34:28.461578 1203385 build_images.go:217] Built localhost/my-image:functional-059694 from /tmp/build.3576417556.tar
I0831 22:34:28.461608 1203385 build_images.go:133] succeeded building to: functional-059694
I0831 22:34:28.461614 1203385 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-059694 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.48s)
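
The stderr above shows the containerd build path end to end: minikube tars the build context, copies it into the node over SSH, and drives BuildKit via buildctl, so no docker daemon is needed on the host. A sketch of the user-facing flow with an illustrative two-file context (the actual testdata/build contents are not shown in this log beyond the step names):

  # Minimal context mirroring the steps in the build log above.
  printf 'FROM gcr.io/k8s-minikube/busybox:latest\nRUN true\nADD content.txt /\n' > Dockerfile
  echo hello > content.txt
  out/minikube-linux-arm64 -p functional-059694 image build -t localhost/my-image:functional-059694 .
  out/minikube-linux-arm64 -p functional-059694 image ls | grep my-image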

TestFunctional/parallel/ImageCommands/Setup (0.8s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-059694
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.80s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.45s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p functional-059694 image load --daemon kicbase/echo-server:functional-059694 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-arm64 -p functional-059694 image load --daemon kicbase/echo-server:functional-059694 --alsologtostderr: (1.170037165s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-059694 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.45s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.39s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p functional-059694 image load --daemon kicbase/echo-server:functional-059694 --alsologtostderr
functional_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p functional-059694 image load --daemon kicbase/echo-server:functional-059694 --alsologtostderr: (1.100924973s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-059694 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.39s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.2s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-059694 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.20s)
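
All three update-context subtests run the same command and differ only in the kubeconfig state it finds; the command rewrites the profile's kubeconfig entry so kubectl points at the cluster's current IP and port. By hand (the kubectl call is an illustrative follow-up, not part of the test):

  # Repoint kubeconfig at the profile's current endpoint, e.g. after a node IP change.
  out/minikube-linux-arm64 -p functional-059694 update-context
  kubectl --context functional-059694 get nodes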

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.21s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-059694 update-context --alsologtostderr -v=2
E0831 22:34:23.920776 1166785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/addons-516593/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.21s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.19s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-059694 update-context --alsologtostderr -v=2
E0831 22:34:23.742360 1166785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/addons-516593/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:34:23.754111 1166785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/addons-516593/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:34:23.772829 1166785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/addons-516593/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:34:23.794976 1166785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/addons-516593/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:34:23.837308 1166785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/addons-516593/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.19s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.68s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-059694
functional_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p functional-059694 image load --daemon kicbase/echo-server:functional-059694 --alsologtostderr
functional_test.go:245: (dbg) Done: out/minikube-linux-arm64 -p functional-059694 image load --daemon kicbase/echo-server:functional-059694 --alsologtostderr: (1.098995035s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-059694 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.68s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.36s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-059694 image save kicbase/echo-server:functional-059694 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.36s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.52s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-arm64 -p functional-059694 image rm kicbase/echo-server:functional-059694 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-059694 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.52s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.81s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-arm64 -p functional-059694 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-059694 image ls
E0831 22:34:24.085604 1166785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/addons-516593/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.81s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.41s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-059694
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-059694 image save --daemon kicbase/echo-server:functional-059694 --alsologtostderr
E0831 22:34:24.407240 1166785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/addons-516593/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-059694
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.41s)
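
Taken together, the image subtests above walk a full round trip between the host docker daemon, a tarball, and the cluster runtime. A condensed sketch using only commands from this run (the tarball path is illustrative):

  # host daemon -> cluster runtime
  out/minikube-linux-arm64 -p functional-059694 image load --daemon kicbase/echo-server:functional-059694
  # cluster -> tarball -> cluster
  out/minikube-linux-arm64 -p functional-059694 image save kicbase/echo-server:functional-059694 /tmp/echo-server.tar
  out/minikube-linux-arm64 -p functional-059694 image rm kicbase/echo-server:functional-059694
  out/minikube-linux-arm64 -p functional-059694 image load /tmp/echo-server.tar
  # cluster -> host daemon
  out/minikube-linux-arm64 -p functional-059694 image save --daemon kicbase/echo-server:functional-059694
  docker image inspect kicbase/echo-server:functional-059694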

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-059694
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-059694
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-059694
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (116.42s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-433453 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E0831 22:34:34.015505 1166785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/addons-516593/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:34:44.257307 1166785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/addons-516593/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:35:04.738828 1166785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/addons-516593/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:35:45.700176 1166785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/addons-516593/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-433453 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m55.598096191s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-433453 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (116.42s)
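
The --ha flag is what makes this a multi-control-plane start: minikube provisions three control-plane nodes instead of one, before AddWorkerNode below brings the count to four. The invocation, minus the logging flags:

  # Three control-plane nodes on the docker driver with containerd.
  out/minikube-linux-arm64 start -p ha-433453 --wait=true --memory=2200 --ha --driver=docker --container-runtime=containerd
  out/minikube-linux-arm64 -p ha-433453 status   # host/kubelet/apiserver should be Running on every control-plane node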

TestMultiControlPlane/serial/DeployApp (30.98s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-433453 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-433453 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-433453 -- rollout status deployment/busybox: (28.017940096s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-433453 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-433453 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-433453 -- exec busybox-7dff88458-k5657 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-433453 -- exec busybox-7dff88458-nvn7k -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-433453 -- exec busybox-7dff88458-zjxtf -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-433453 -- exec busybox-7dff88458-k5657 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-433453 -- exec busybox-7dff88458-nvn7k -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-433453 -- exec busybox-7dff88458-zjxtf -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-433453 -- exec busybox-7dff88458-k5657 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-433453 -- exec busybox-7dff88458-nvn7k -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-433453 -- exec busybox-7dff88458-zjxtf -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (30.98s)
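
The deployment here is three busybox replicas spread across the cluster, and the assertion is that each pod can resolve the cluster's own DNS names. Reproducing one probe by hand (the pod name comes from this run and changes every run):

  kubectl --context ha-433453 get pods -o jsonpath='{.items[*].metadata.name}'
  kubectl --context ha-433453 exec busybox-7dff88458-k5657 -- nslookup kubernetes.default.svc.cluster.local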

TestMultiControlPlane/serial/PingHostFromPods (1.93s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-433453 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-433453 -- exec busybox-7dff88458-k5657 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-433453 -- exec busybox-7dff88458-k5657 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-433453 -- exec busybox-7dff88458-nvn7k -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-433453 -- exec busybox-7dff88458-nvn7k -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-433453 -- exec busybox-7dff88458-zjxtf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-433453 -- exec busybox-7dff88458-zjxtf -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.93s)
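
host.minikube.internal is the in-guest alias minikube maintains for the host machine; this test resolves it from each pod and pings the docker network gateway (192.168.49.1) to prove the route works. One probe by hand, with the same pod-name caveat as above:

  kubectl --context ha-433453 exec busybox-7dff88458-k5657 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
  kubectl --context ha-433453 exec busybox-7dff88458-k5657 -- sh -c "ping -c 1 192.168.49.1"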

TestMultiControlPlane/serial/AddWorkerNode (25.75s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-433453 -v=7 --alsologtostderr
E0831 22:37:07.622224 1166785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/addons-516593/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-433453 -v=7 --alsologtostderr: (24.785756148s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-433453 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (25.75s)
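
node add joins a worker node by default (ha-433453-m04 in this run); the status call then reports all four nodes. By hand, with a kubectl check added for illustration:

  out/minikube-linux-arm64 node add -p ha-433453
  out/minikube-linux-arm64 -p ha-433453 status
  kubectl --context ha-433453 get nodes   # the new worker should reach Ready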

TestMultiControlPlane/serial/NodeLabels (0.12s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-433453 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.12s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.74s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.74s)

TestMultiControlPlane/serial/CopyFile (18.88s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-arm64 -p ha-433453 status --output json -v=7 --alsologtostderr
helpers_test.go:557: (dbg) Run:  out/minikube-linux-arm64 -p ha-433453 cp testdata/cp-test.txt ha-433453:/home/docker/cp-test.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p ha-433453 ssh -n ha-433453 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-arm64 -p ha-433453 cp ha-433453:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2592018830/001/cp-test_ha-433453.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p ha-433453 ssh -n ha-433453 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-arm64 -p ha-433453 cp ha-433453:/home/docker/cp-test.txt ha-433453-m02:/home/docker/cp-test_ha-433453_ha-433453-m02.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p ha-433453 ssh -n ha-433453 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p ha-433453 ssh -n ha-433453-m02 "sudo cat /home/docker/cp-test_ha-433453_ha-433453-m02.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-arm64 -p ha-433453 cp ha-433453:/home/docker/cp-test.txt ha-433453-m03:/home/docker/cp-test_ha-433453_ha-433453-m03.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p ha-433453 ssh -n ha-433453 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p ha-433453 ssh -n ha-433453-m03 "sudo cat /home/docker/cp-test_ha-433453_ha-433453-m03.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-arm64 -p ha-433453 cp ha-433453:/home/docker/cp-test.txt ha-433453-m04:/home/docker/cp-test_ha-433453_ha-433453-m04.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p ha-433453 ssh -n ha-433453 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p ha-433453 ssh -n ha-433453-m04 "sudo cat /home/docker/cp-test_ha-433453_ha-433453-m04.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-arm64 -p ha-433453 cp testdata/cp-test.txt ha-433453-m02:/home/docker/cp-test.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p ha-433453 ssh -n ha-433453-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-arm64 -p ha-433453 cp ha-433453-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2592018830/001/cp-test_ha-433453-m02.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p ha-433453 ssh -n ha-433453-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-arm64 -p ha-433453 cp ha-433453-m02:/home/docker/cp-test.txt ha-433453:/home/docker/cp-test_ha-433453-m02_ha-433453.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p ha-433453 ssh -n ha-433453-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p ha-433453 ssh -n ha-433453 "sudo cat /home/docker/cp-test_ha-433453-m02_ha-433453.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-arm64 -p ha-433453 cp ha-433453-m02:/home/docker/cp-test.txt ha-433453-m03:/home/docker/cp-test_ha-433453-m02_ha-433453-m03.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p ha-433453 ssh -n ha-433453-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p ha-433453 ssh -n ha-433453-m03 "sudo cat /home/docker/cp-test_ha-433453-m02_ha-433453-m03.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-arm64 -p ha-433453 cp ha-433453-m02:/home/docker/cp-test.txt ha-433453-m04:/home/docker/cp-test_ha-433453-m02_ha-433453-m04.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p ha-433453 ssh -n ha-433453-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p ha-433453 ssh -n ha-433453-m04 "sudo cat /home/docker/cp-test_ha-433453-m02_ha-433453-m04.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-arm64 -p ha-433453 cp testdata/cp-test.txt ha-433453-m03:/home/docker/cp-test.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p ha-433453 ssh -n ha-433453-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-arm64 -p ha-433453 cp ha-433453-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2592018830/001/cp-test_ha-433453-m03.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p ha-433453 ssh -n ha-433453-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-arm64 -p ha-433453 cp ha-433453-m03:/home/docker/cp-test.txt ha-433453:/home/docker/cp-test_ha-433453-m03_ha-433453.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p ha-433453 ssh -n ha-433453-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p ha-433453 ssh -n ha-433453 "sudo cat /home/docker/cp-test_ha-433453-m03_ha-433453.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-arm64 -p ha-433453 cp ha-433453-m03:/home/docker/cp-test.txt ha-433453-m02:/home/docker/cp-test_ha-433453-m03_ha-433453-m02.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p ha-433453 ssh -n ha-433453-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p ha-433453 ssh -n ha-433453-m02 "sudo cat /home/docker/cp-test_ha-433453-m03_ha-433453-m02.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-arm64 -p ha-433453 cp ha-433453-m03:/home/docker/cp-test.txt ha-433453-m04:/home/docker/cp-test_ha-433453-m03_ha-433453-m04.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p ha-433453 ssh -n ha-433453-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p ha-433453 ssh -n ha-433453-m04 "sudo cat /home/docker/cp-test_ha-433453-m03_ha-433453-m04.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-arm64 -p ha-433453 cp testdata/cp-test.txt ha-433453-m04:/home/docker/cp-test.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p ha-433453 ssh -n ha-433453-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-arm64 -p ha-433453 cp ha-433453-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2592018830/001/cp-test_ha-433453-m04.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p ha-433453 ssh -n ha-433453-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-arm64 -p ha-433453 cp ha-433453-m04:/home/docker/cp-test.txt ha-433453:/home/docker/cp-test_ha-433453-m04_ha-433453.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p ha-433453 ssh -n ha-433453-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p ha-433453 ssh -n ha-433453 "sudo cat /home/docker/cp-test_ha-433453-m04_ha-433453.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-arm64 -p ha-433453 cp ha-433453-m04:/home/docker/cp-test.txt ha-433453-m02:/home/docker/cp-test_ha-433453-m04_ha-433453-m02.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p ha-433453 ssh -n ha-433453-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p ha-433453 ssh -n ha-433453-m02 "sudo cat /home/docker/cp-test_ha-433453-m04_ha-433453-m02.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-arm64 -p ha-433453 cp ha-433453-m04:/home/docker/cp-test.txt ha-433453-m03:/home/docker/cp-test_ha-433453-m04_ha-433453-m03.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p ha-433453 ssh -n ha-433453-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p ha-433453 ssh -n ha-433453-m03 "sudo cat /home/docker/cp-test_ha-433453-m04_ha-433453-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (18.88s)
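Note: every cp/ssh pair above is one leg of a single round-trip check: copy the fixture into a node, then cat it back over ssh and compare. A minimal sketch of one leg, with node and path names taken from this run:

	out/minikube-linux-arm64 -p ha-433453 cp testdata/cp-test.txt ha-433453-m02:/home/docker/cp-test.txt
	out/minikube-linux-arm64 -p ha-433453 ssh -n ha-433453-m02 "sudo cat /home/docker/cp-test.txt"

The same pattern repeats for every source/destination pair, including node-to-node copies.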

TestMultiControlPlane/serial/StopSecondaryNode (12.8s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-arm64 -p ha-433453 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-arm64 -p ha-433453 node stop m02 -v=7 --alsologtostderr: (12.0640107s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-arm64 -p ha-433453 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-433453 status -v=7 --alsologtostderr: exit status 7 (734.531573ms)
-- stdout --
	ha-433453
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-433453-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-433453-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-433453-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I0831 22:37:58.403157 1219606 out.go:345] Setting OutFile to fd 1 ...
	I0831 22:37:58.403368 1219606 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:37:58.403400 1219606 out.go:358] Setting ErrFile to fd 2...
	I0831 22:37:58.403420 1219606 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:37:58.403687 1219606 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-1161402/.minikube/bin
	I0831 22:37:58.403944 1219606 out.go:352] Setting JSON to false
	I0831 22:37:58.404012 1219606 mustload.go:65] Loading cluster: ha-433453
	I0831 22:37:58.404090 1219606 notify.go:220] Checking for updates...
	I0831 22:37:58.404475 1219606 config.go:182] Loaded profile config "ha-433453": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0831 22:37:58.404514 1219606 status.go:255] checking status of ha-433453 ...
	I0831 22:37:58.405200 1219606 cli_runner.go:164] Run: docker container inspect ha-433453 --format={{.State.Status}}
	I0831 22:37:58.430433 1219606 status.go:330] ha-433453 host status = "Running" (err=<nil>)
	I0831 22:37:58.430458 1219606 host.go:66] Checking if "ha-433453" exists ...
	I0831 22:37:58.430930 1219606 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-433453")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-433453
	I0831 22:37:58.467701 1219606 host.go:66] Checking if "ha-433453" exists ...
	I0831 22:37:58.468018 1219606 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0831 22:37:58.468057 1219606 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-433453
	I0831 22:37:58.492368 1219606 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34269 SSHKeyPath:/home/jenkins/minikube-integration/18943-1161402/.minikube/machines/ha-433453/id_rsa Username:docker}
	I0831 22:37:58.589814 1219606 ssh_runner.go:195] Run: systemctl --version
	I0831 22:37:58.593939 1219606 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0831 22:37:58.606044 1219606 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0831 22:37:58.677105 1219606 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:55 OomKillDisable:true NGoroutines:71 SystemTime:2024-08-31 22:37:58.66624951 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0831 22:37:58.677650 1219606 kubeconfig.go:125] found "ha-433453" server: "https://192.168.49.254:8443"
	I0831 22:37:58.677684 1219606 api_server.go:166] Checking apiserver status ...
	I0831 22:37:58.677730 1219606 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0831 22:37:58.692221 1219606 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1441/cgroup
	I0831 22:37:58.701784 1219606 api_server.go:182] apiserver freezer: "5:freezer:/docker/c77f895b7273764c520720ebf5c68e8ae6c5e6b05e1e641dec6a904e519a5247/kubepods/burstable/pod387fd77f4fa559cf6adca10130768221/3c427dbbe68c339bc3d7c3536f989a93f8bdfc6b4973685b0b18915f98cbaf7b"
	I0831 22:37:58.701855 1219606 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/c77f895b7273764c520720ebf5c68e8ae6c5e6b05e1e641dec6a904e519a5247/kubepods/burstable/pod387fd77f4fa559cf6adca10130768221/3c427dbbe68c339bc3d7c3536f989a93f8bdfc6b4973685b0b18915f98cbaf7b/freezer.state
	I0831 22:37:58.712228 1219606 api_server.go:204] freezer state: "THAWED"
	I0831 22:37:58.712259 1219606 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0831 22:37:58.720235 1219606 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0831 22:37:58.720262 1219606 status.go:422] ha-433453 apiserver status = Running (err=<nil>)
	I0831 22:37:58.720273 1219606 status.go:257] ha-433453 status: &{Name:ha-433453 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0831 22:37:58.720290 1219606 status.go:255] checking status of ha-433453-m02 ...
	I0831 22:37:58.720718 1219606 cli_runner.go:164] Run: docker container inspect ha-433453-m02 --format={{.State.Status}}
	I0831 22:37:58.736199 1219606 status.go:330] ha-433453-m02 host status = "Stopped" (err=<nil>)
	I0831 22:37:58.736220 1219606 status.go:343] host is not running, skipping remaining checks
	I0831 22:37:58.736227 1219606 status.go:257] ha-433453-m02 status: &{Name:ha-433453-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0831 22:37:58.736247 1219606 status.go:255] checking status of ha-433453-m03 ...
	I0831 22:37:58.736568 1219606 cli_runner.go:164] Run: docker container inspect ha-433453-m03 --format={{.State.Status}}
	I0831 22:37:58.751819 1219606 status.go:330] ha-433453-m03 host status = "Running" (err=<nil>)
	I0831 22:37:58.751841 1219606 host.go:66] Checking if "ha-433453-m03" exists ...
	I0831 22:37:58.752147 1219606 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-433453")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-433453-m03
	I0831 22:37:58.771966 1219606 host.go:66] Checking if "ha-433453-m03" exists ...
	I0831 22:37:58.772266 1219606 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0831 22:37:58.772309 1219606 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-433453-m03
	I0831 22:37:58.789560 1219606 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34279 SSHKeyPath:/home/jenkins/minikube-integration/18943-1161402/.minikube/machines/ha-433453-m03/id_rsa Username:docker}
	I0831 22:37:58.881818 1219606 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0831 22:37:58.894320 1219606 kubeconfig.go:125] found "ha-433453" server: "https://192.168.49.254:8443"
	I0831 22:37:58.894351 1219606 api_server.go:166] Checking apiserver status ...
	I0831 22:37:58.894413 1219606 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0831 22:37:58.904867 1219606 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1323/cgroup
	I0831 22:37:58.914382 1219606 api_server.go:182] apiserver freezer: "5:freezer:/docker/93b1730dae49bc368e349cecdb7decce3b615f7ecf5500ce66f60d914abe48a4/kubepods/burstable/pod2ffe095517b4cbf2aec16c5e16172b6f/dc1200a127fd6200006159f50cb238ec25afd515bb469c82573af57321f97f7a"
	I0831 22:37:58.914454 1219606 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/93b1730dae49bc368e349cecdb7decce3b615f7ecf5500ce66f60d914abe48a4/kubepods/burstable/pod2ffe095517b4cbf2aec16c5e16172b6f/dc1200a127fd6200006159f50cb238ec25afd515bb469c82573af57321f97f7a/freezer.state
	I0831 22:37:58.923392 1219606 api_server.go:204] freezer state: "THAWED"
	I0831 22:37:58.923421 1219606 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0831 22:37:58.931055 1219606 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0831 22:37:58.931082 1219606 status.go:422] ha-433453-m03 apiserver status = Running (err=<nil>)
	I0831 22:37:58.931092 1219606 status.go:257] ha-433453-m03 status: &{Name:ha-433453-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0831 22:37:58.931113 1219606 status.go:255] checking status of ha-433453-m04 ...
	I0831 22:37:58.931431 1219606 cli_runner.go:164] Run: docker container inspect ha-433453-m04 --format={{.State.Status}}
	I0831 22:37:58.948339 1219606 status.go:330] ha-433453-m04 host status = "Running" (err=<nil>)
	I0831 22:37:58.948366 1219606 host.go:66] Checking if "ha-433453-m04" exists ...
	I0831 22:37:58.948712 1219606 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-433453")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-433453-m04
	I0831 22:37:58.965623 1219606 host.go:66] Checking if "ha-433453-m04" exists ...
	I0831 22:37:58.965922 1219606 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0831 22:37:58.965965 1219606 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-433453-m04
	I0831 22:37:58.982883 1219606 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34284 SSHKeyPath:/home/jenkins/minikube-integration/18943-1161402/.minikube/machines/ha-433453-m04/id_rsa Username:docker}
	I0831 22:37:59.077682 1219606 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0831 22:37:59.091024 1219606 status.go:257] ha-433453-m04 status: &{Name:ha-433453-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.80s)
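Note: the exit status 7 above is the expected signal rather than a failure: by our reading of minikube's status help text, the exit code ORs one bit per unhealthy component (1 = host, 2 = kubelet, 4 = apiserver), so a fully stopped m02 yields 1+2+4 = 7. A minimal sketch of asserting that in a shell, assuming the same profile:

	out/minikube-linux-arm64 -p ha-433453 status -v=7 --alsologtostderr
	[ $? -eq 7 ] && echo "secondary node is down, as expected"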

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.56s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.56s)

TestMultiControlPlane/serial/RestartSecondaryNode (18.79s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-arm64 -p ha-433453 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Done: out/minikube-linux-arm64 -p ha-433453 node start m02 -v=7 --alsologtostderr: (17.706990808s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p ha-433453 status -v=7 --alsologtostderr
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (18.79s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.73s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.73s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (138.06s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-433453 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-433453 -v=7 --alsologtostderr
E0831 22:38:37.568791 1166785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/functional-059694/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:38:37.575187 1166785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/functional-059694/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:38:37.586560 1166785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/functional-059694/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:38:37.607901 1166785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/functional-059694/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:38:37.649271 1166785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/functional-059694/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:38:37.730598 1166785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/functional-059694/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:38:37.892174 1166785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/functional-059694/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:38:38.213993 1166785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/functional-059694/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:38:38.855880 1166785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/functional-059694/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:38:40.137203 1166785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/functional-059694/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:38:42.698536 1166785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/functional-059694/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:38:47.820806 1166785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/functional-059694/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-linux-arm64 stop -p ha-433453 -v=7 --alsologtostderr: (37.369100629s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-arm64 start -p ha-433453 --wait=true -v=7 --alsologtostderr
E0831 22:38:58.062958 1166785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/functional-059694/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:39:18.545116 1166785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/functional-059694/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:39:23.741835 1166785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/addons-516593/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:39:51.464432 1166785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/addons-516593/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:39:59.507320 1166785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/functional-059694/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-arm64 start -p ha-433453 --wait=true -v=7 --alsologtostderr: (1m40.534140166s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-433453
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (138.06s)

TestMultiControlPlane/serial/DeleteSecondaryNode (10.52s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-arm64 -p ha-433453 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-arm64 -p ha-433453 node delete m03 -v=7 --alsologtostderr: (9.617478439s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-arm64 -p ha-433453 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.52s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.54s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.54s)

TestMultiControlPlane/serial/StopCluster (36.38s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-arm64 -p ha-433453 stop -v=7 --alsologtostderr
E0831 22:41:21.428915 1166785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/functional-059694/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:531: (dbg) Done: out/minikube-linux-arm64 -p ha-433453 stop -v=7 --alsologtostderr: (36.270627862s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-arm64 -p ha-433453 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-433453 status -v=7 --alsologtostderr: exit status 7 (113.073047ms)
-- stdout --
	ha-433453
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-433453-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-433453-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0831 22:41:24.613490 1233957 out.go:345] Setting OutFile to fd 1 ...
	I0831 22:41:24.613709 1233957 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:41:24.613736 1233957 out.go:358] Setting ErrFile to fd 2...
	I0831 22:41:24.613758 1233957 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:41:24.614043 1233957 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-1161402/.minikube/bin
	I0831 22:41:24.614285 1233957 out.go:352] Setting JSON to false
	I0831 22:41:24.614354 1233957 mustload.go:65] Loading cluster: ha-433453
	I0831 22:41:24.614499 1233957 notify.go:220] Checking for updates...
	I0831 22:41:24.614908 1233957 config.go:182] Loaded profile config "ha-433453": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0831 22:41:24.614946 1233957 status.go:255] checking status of ha-433453 ...
	I0831 22:41:24.615504 1233957 cli_runner.go:164] Run: docker container inspect ha-433453 --format={{.State.Status}}
	I0831 22:41:24.633628 1233957 status.go:330] ha-433453 host status = "Stopped" (err=<nil>)
	I0831 22:41:24.633651 1233957 status.go:343] host is not running, skipping remaining checks
	I0831 22:41:24.633659 1233957 status.go:257] ha-433453 status: &{Name:ha-433453 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0831 22:41:24.633698 1233957 status.go:255] checking status of ha-433453-m02 ...
	I0831 22:41:24.633997 1233957 cli_runner.go:164] Run: docker container inspect ha-433453-m02 --format={{.State.Status}}
	I0831 22:41:24.660754 1233957 status.go:330] ha-433453-m02 host status = "Stopped" (err=<nil>)
	I0831 22:41:24.660776 1233957 status.go:343] host is not running, skipping remaining checks
	I0831 22:41:24.660783 1233957 status.go:257] ha-433453-m02 status: &{Name:ha-433453-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0831 22:41:24.660803 1233957 status.go:255] checking status of ha-433453-m04 ...
	I0831 22:41:24.661099 1233957 cli_runner.go:164] Run: docker container inspect ha-433453-m04 --format={{.State.Status}}
	I0831 22:41:24.677741 1233957 status.go:330] ha-433453-m04 host status = "Stopped" (err=<nil>)
	I0831 22:41:24.677766 1233957 status.go:343] host is not running, skipping remaining checks
	I0831 22:41:24.677774 1233957 status.go:257] ha-433453-m04 status: &{Name:ha-433453-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.38s)

TestMultiControlPlane/serial/RestartCluster (51.39s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-arm64 start -p ha-433453 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
ha_test.go:560: (dbg) Done: out/minikube-linux-arm64 start -p ha-433453 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (50.463227021s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-arm64 -p ha-433453 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (51.39s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.56s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.56s)

TestMultiControlPlane/serial/AddSecondaryNode (39.06s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-433453 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-arm64 node add -p ha-433453 --control-plane -v=7 --alsologtostderr: (38.041312467s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-arm64 -p ha-433453 status -v=7 --alsologtostderr
ha_test.go:611: (dbg) Done: out/minikube-linux-arm64 -p ha-433453 status -v=7 --alsologtostderr: (1.015265598s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (39.06s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.78s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.78s)

TestJSONOutput/start/Command (55.43s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-093435 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd
E0831 22:43:37.567333 1166785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/functional-059694/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-093435 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd: (55.424522765s)
--- PASS: TestJSONOutput/start/Command (55.43s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (1.12s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-093435 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 pause -p json-output-093435 --output=json --user=testUser: (1.115217974s)
--- PASS: TestJSONOutput/pause/Command (1.12s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.67s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-093435 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.67s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (1.27s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-093435 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-093435 --output=json --user=testUser: (1.270373046s)
--- PASS: TestJSONOutput/stop/Command (1.27s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.21s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-860590 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-860590 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (70.519419ms)
-- stdout --
	{"specversion":"1.0","id":"7bebf229-ecda-4e2d-bc64-21c64a0a48ca","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-860590] minikube v1.33.1 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"b8a860c3-bc55-4361-b751-241897ec24ee","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18943"}}
	{"specversion":"1.0","id":"2ccfdfff-d700-46cf-9f86-73777b32dcbe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"a79d3f31-ba26-45b2-9ead-174e2639807a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18943-1161402/kubeconfig"}}
	{"specversion":"1.0","id":"0cfafca7-b7d7-4681-a250-2ac25f1f7a37","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-1161402/.minikube"}}
	{"specversion":"1.0","id":"bc87a042-08bc-4b96-b022-575f48d07c07","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"f9cc8823-bd32-492c-8d32-c2e58b944909","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"190c7596-6da6-441f-9f6c-78a443ea1367","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-860590" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-860590
--- PASS: TestErrorJSONOutput (0.21s)
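Note: each --output=json line above is a CloudEvents envelope (specversion, id, source, type, data), so errors can be picked out mechanically. A minimal sketch with jq, assuming jq is available; flags and profile name are the ones from this run:

	out/minikube-linux-arm64 start -p json-output-error-860590 --memory=2200 --output=json --wait=true --driver=fail \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name + ": " + .data.message'
	# prints: DRV_UNSUPPORTED_OS: The driver 'fail' is not supported on linux/arm64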

TestKicCustomNetwork/create_custom_network (43.32s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-265693 --network=
E0831 22:44:23.742185 1166785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/addons-516593/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-265693 --network=: (41.10015043s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-265693" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-265693
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-265693: (2.197548455s)
--- PASS: TestKicCustomNetwork/create_custom_network (43.32s)

TestKicCustomNetwork/use_default_bridge_network (33.95s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-413069 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-413069 --network=bridge: (31.987550447s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-413069" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-413069
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-413069: (1.943253762s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (33.95s)

TestKicExistingNetwork (35.6s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-803982 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-803982 --network=existing-network: (33.323827034s)
helpers_test.go:176: Cleaning up "existing-network-803982" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-803982
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-803982: (2.05900087s)
--- PASS: TestKicExistingNetwork (35.60s)
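Note: unlike the --network= variants above, this test expects the Docker network to exist before minikube starts; the creation step does not appear in the log. A minimal sketch of the precondition, assuming the default bridge driver and the network name from this run:

	docker network create existing-network
	out/minikube-linux-arm64 start -p existing-network-803982 --network=existing-network
	docker network rm existing-network   # cleanup after the profile is deleted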

TestKicCustomSubnet (30.82s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-946049 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-946049 --subnet=192.168.60.0/24: (28.698958342s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-946049 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:176: Cleaning up "custom-subnet-946049" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-946049
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-946049: (2.093458629s)
--- PASS: TestKicCustomSubnet (30.82s)

TestKicStaticIP (32.75s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-147253 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-147253 --static-ip=192.168.200.200: (30.553302314s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-147253 ip
helpers_test.go:176: Cleaning up "static-ip-147253" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-147253
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-147253: (2.042995944s)
--- PASS: TestKicStaticIP (32.75s)

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (65.83s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-149592 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-149592 --driver=docker  --container-runtime=containerd: (29.600472641s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-152177 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-152177 --driver=docker  --container-runtime=containerd: (30.829607225s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-149592
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-152177
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:176: Cleaning up "second-152177" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p second-152177
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p second-152177: (1.948511558s)
helpers_test.go:176: Cleaning up "first-149592" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p first-149592
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p first-149592: (2.200717288s)
--- PASS: TestMinikubeProfile (65.83s)

TestMountStart/serial/StartWithMountFirst (6s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-779653 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-779653 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (4.996695648s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.00s)
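
Note: the start line above wires a host directory into the guest over 9p. The same
flags outside the harness (profile name "mount-demo" is hypothetical):

    # --mount exposes the host dir at /minikube-host inside the guest; the other
    # flags pin the 9p server port, ownership (uid/gid 0), and message size.
    $ minikube start -p mount-demo --memory=2048 --no-kubernetes \
        --mount --mount-port 46464 --mount-uid 0 --mount-gid 0 --mount-msize 6543
    $ minikube -p mount-demo ssh -- ls /minikube-host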

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.26s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-779653 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (6.18s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-793577 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-793577 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.179467427s)
--- PASS: TestMountStart/serial/StartWithMountSecond (6.18s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.25s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-793577 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.25s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.6s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-779653 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-779653 --alsologtostderr -v=5: (1.603997664s)
--- PASS: TestMountStart/serial/DeleteFirst (1.60s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.25s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-793577 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.25s)

                                                
                                    
TestMountStart/serial/Stop (1.19s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-793577
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-793577: (1.192164196s)
--- PASS: TestMountStart/serial/Stop (1.19s)

                                                
                                    
TestMountStart/serial/RestartStopped (7.95s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-793577
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-793577: (6.949169624s)
--- PASS: TestMountStart/serial/RestartStopped (7.95s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-793577 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

                                                
                                    
TestContainerIPsMultiNetwork/serial/CreateExtnet (0.07s)

=== RUN   TestContainerIPsMultiNetwork/serial/CreateExtnet
multinetwork_test.go:99: (dbg) Run:  docker network create network-extnet-656955
multinetwork_test.go:104: external network network-extnet-656955 created
--- PASS: TestContainerIPsMultiNetwork/serial/CreateExtnet (0.07s)

                                                
                                    
TestContainerIPsMultiNetwork/serial/FreshStart (59.49s)

=== RUN   TestContainerIPsMultiNetwork/serial/FreshStart
multinetwork_test.go:148: (dbg) Run:  out/minikube-linux-arm64 start -p extnet-650686 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
E0831 22:48:37.567414 1166785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/functional-059694/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:49:23.742305 1166785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/addons-516593/client.crt: no such file or directory" logger="UnhandledError"
multinetwork_test.go:148: (dbg) Done: out/minikube-linux-arm64 start -p extnet-650686 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (59.46574325s)
multinetwork_test.go:161: cluster extnet-650686 started with address 192.168.67.2/
--- PASS: TestContainerIPsMultiNetwork/serial/FreshStart (59.49s)

                                                
                                    
TestContainerIPsMultiNetwork/serial/ConnectExtnet (0.1s)

=== RUN   TestContainerIPsMultiNetwork/serial/ConnectExtnet
multinetwork_test.go:113: (dbg) Run:  docker network connect network-extnet-656955 extnet-650686
multinetwork_test.go:126: cluster extnet-650686 was attached to network network-extnet-656955 with address 172.18.0.2/
--- PASS: TestContainerIPsMultiNetwork/serial/ConnectExtnet (0.10s)
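
Note: this step is plain Docker networking against the cluster container. The
equivalent by hand (network name "extra-net" is hypothetical):

    # Attach the running cluster container to a second user-defined network,
    # then read back the per-network addresses Docker assigned.
    $ docker network create extra-net
    $ docker network connect extra-net extnet-650686
    $ docker inspect -f '{{range $k, $v := .NetworkSettings.Networks}}{{$k}}: {{$v.IPAddress}}{{"\n"}}{{end}}' extnet-650686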

                                                
                                    
TestContainerIPsMultiNetwork/serial/Stop (5.93s)

=== RUN   TestContainerIPsMultiNetwork/serial/Stop
multinetwork_test.go:170: (dbg) Run:  out/minikube-linux-arm64 stop -p extnet-650686 --alsologtostderr -v=5
multinetwork_test.go:170: (dbg) Done: out/minikube-linux-arm64 stop -p extnet-650686 --alsologtostderr -v=5: (5.927136484s)
--- PASS: TestContainerIPsMultiNetwork/serial/Stop (5.93s)

                                                
                                    
TestContainerIPsMultiNetwork/serial/VerifyStatus (0.07s)

=== RUN   TestContainerIPsMultiNetwork/serial/VerifyStatus
helpers_test.go:700: (dbg) Run:  out/minikube-linux-arm64 status -p extnet-650686 --output=json --layout=cluster
helpers_test.go:700: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p extnet-650686 --output=json --layout=cluster: exit status 7 (71.804963ms)

                                                
                                                
-- stdout --
	{"Name":"extnet-650686","StatusCode":405,"StatusName":"Stopped","Step":"Done","StepDetail":"* 1 node stopped.","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":405,"StatusName":"Stopped"}},"Nodes":[{"Name":"extnet-650686","StatusCode":405,"StatusName":"Stopped","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestContainerIPsMultiNetwork/serial/VerifyStatus (0.07s)
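
Note: for a stopped cluster `minikube status` exits 7 while the JSON encodes the
same state as StatusCode 405 ("Stopped") on the node and each component, which is
the shape asserted above. To observe both at once:

    $ minikube status -p extnet-650686 --output=json --layout=cluster; echo "exit=$?"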

                                                
                                    
TestContainerIPsMultiNetwork/serial/Start (26.26s)

=== RUN   TestContainerIPsMultiNetwork/serial/Start
multinetwork_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p extnet-650686 --alsologtostderr -v=5
multinetwork_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p extnet-650686 --alsologtostderr -v=5: (26.219316408s)
--- PASS: TestContainerIPsMultiNetwork/serial/Start (26.26s)

                                                
                                    
TestContainerIPsMultiNetwork/serial/VerifyNetworks (0.02s)

=== RUN   TestContainerIPsMultiNetwork/serial/VerifyNetworks
multinetwork_test.go:225: (dbg) Run:  docker inspect extnet-650686
--- PASS: TestContainerIPsMultiNetwork/serial/VerifyNetworks (0.02s)

                                                
                                    
TestContainerIPsMultiNetwork/serial/Delete (2.43s)

=== RUN   TestContainerIPsMultiNetwork/serial/Delete
multinetwork_test.go:253: (dbg) Run:  out/minikube-linux-arm64 delete -p extnet-650686 --alsologtostderr -v=5
multinetwork_test.go:253: (dbg) Done: out/minikube-linux-arm64 delete -p extnet-650686 --alsologtostderr -v=5: (2.434733182s)
--- PASS: TestContainerIPsMultiNetwork/serial/Delete (2.43s)

                                                
                                    
TestContainerIPsMultiNetwork/serial/DeleteExtnet (0.1s)

=== RUN   TestContainerIPsMultiNetwork/serial/DeleteExtnet
multinetwork_test.go:136: (dbg) Run:  docker network rm network-extnet-656955
multinetwork_test.go:140: external network network-extnet-656955 deleted
--- PASS: TestContainerIPsMultiNetwork/serial/DeleteExtnet (0.10s)

                                                
                                    
TestContainerIPsMultiNetwork/serial/VerifyDeletedResources (0.12s)

=== RUN   TestContainerIPsMultiNetwork/serial/VerifyDeletedResources
multinetwork_test.go:263: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
multinetwork_test.go:289: (dbg) Run:  docker ps -a
multinetwork_test.go:294: (dbg) Run:  docker volume inspect extnet-650686
multinetwork_test.go:294: (dbg) Non-zero exit: docker volume inspect extnet-650686: exit status 1 (15.6625ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get extnet-650686: no such volume

                                                
                                                
** /stderr **
multinetwork_test.go:299: (dbg) Run:  docker network ls
--- PASS: TestContainerIPsMultiNetwork/serial/VerifyDeletedResources (0.12s)
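
Note: teardown is verified negatively: the profile must disappear from `profile
list`, and its container, volume, and network must be gone from Docker. Spot checks:

    $ docker ps -a --filter name=extnet-650686          # expect no matching rows
    $ docker volume inspect extnet-650686               # expect exit 1: no such volume
    $ docker network ls --filter name=extnet-650686     # expect no leftover network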

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (64.38s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-396098 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0831 22:50:46.826245 1166785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/addons-516593/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-396098 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m3.857032219s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-396098 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (64.38s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (16.93s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-396098 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-396098 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-396098 -- rollout status deployment/busybox: (14.846990717s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-396098 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-396098 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-396098 -- exec busybox-7dff88458-954nc -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-396098 -- exec busybox-7dff88458-n4kdn -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-396098 -- exec busybox-7dff88458-954nc -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-396098 -- exec busybox-7dff88458-n4kdn -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-396098 -- exec busybox-7dff88458-954nc -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-396098 -- exec busybox-7dff88458-n4kdn -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (16.93s)
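
Note: the manifest places one busybox replica per node, and in-cluster DNS is then
exercised from each pod. A single probe in isolation (pod names are the ones from
this run):

    $ kubectl --context multinode-396098 get pods -o wide
    $ kubectl --context multinode-396098 exec busybox-7dff88458-954nc -- \
        nslookup kubernetes.default.svc.cluster.local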

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.97s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-396098 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-396098 -- exec busybox-7dff88458-954nc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-396098 -- exec busybox-7dff88458-954nc -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-396098 -- exec busybox-7dff88458-n4kdn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-396098 -- exec busybox-7dff88458-n4kdn -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.97s)
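
Note: the shell pipeline above scrapes busybox's nslookup output, taking line 5
and its third space-separated field as the resolved host address, which is then
pinged to prove pod-to-host reachability:

    $ kubectl --context multinode-396098 exec busybox-7dff88458-954nc -- \
        sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
    192.168.67.1
    $ kubectl --context multinode-396098 exec busybox-7dff88458-954nc -- \
        sh -c "ping -c 1 192.168.67.1"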

                                                
                                    
TestMultiNode/serial/AddNode (16.33s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-396098 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-396098 -v 3 --alsologtostderr: (15.667857753s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-396098 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (16.33s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.09s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-396098 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.31s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.31s)

                                                
                                    
TestMultiNode/serial/CopyFile (9.93s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-396098 status --output json --alsologtostderr
helpers_test.go:557: (dbg) Run:  out/minikube-linux-arm64 -p multinode-396098 cp testdata/cp-test.txt multinode-396098:/home/docker/cp-test.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p multinode-396098 ssh -n multinode-396098 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-arm64 -p multinode-396098 cp multinode-396098:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3041471834/001/cp-test_multinode-396098.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p multinode-396098 ssh -n multinode-396098 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-arm64 -p multinode-396098 cp multinode-396098:/home/docker/cp-test.txt multinode-396098-m02:/home/docker/cp-test_multinode-396098_multinode-396098-m02.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p multinode-396098 ssh -n multinode-396098 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p multinode-396098 ssh -n multinode-396098-m02 "sudo cat /home/docker/cp-test_multinode-396098_multinode-396098-m02.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-arm64 -p multinode-396098 cp multinode-396098:/home/docker/cp-test.txt multinode-396098-m03:/home/docker/cp-test_multinode-396098_multinode-396098-m03.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p multinode-396098 ssh -n multinode-396098 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p multinode-396098 ssh -n multinode-396098-m03 "sudo cat /home/docker/cp-test_multinode-396098_multinode-396098-m03.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-arm64 -p multinode-396098 cp testdata/cp-test.txt multinode-396098-m02:/home/docker/cp-test.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p multinode-396098 ssh -n multinode-396098-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-arm64 -p multinode-396098 cp multinode-396098-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3041471834/001/cp-test_multinode-396098-m02.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p multinode-396098 ssh -n multinode-396098-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-arm64 -p multinode-396098 cp multinode-396098-m02:/home/docker/cp-test.txt multinode-396098:/home/docker/cp-test_multinode-396098-m02_multinode-396098.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p multinode-396098 ssh -n multinode-396098-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p multinode-396098 ssh -n multinode-396098 "sudo cat /home/docker/cp-test_multinode-396098-m02_multinode-396098.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-arm64 -p multinode-396098 cp multinode-396098-m02:/home/docker/cp-test.txt multinode-396098-m03:/home/docker/cp-test_multinode-396098-m02_multinode-396098-m03.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p multinode-396098 ssh -n multinode-396098-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p multinode-396098 ssh -n multinode-396098-m03 "sudo cat /home/docker/cp-test_multinode-396098-m02_multinode-396098-m03.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-arm64 -p multinode-396098 cp testdata/cp-test.txt multinode-396098-m03:/home/docker/cp-test.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p multinode-396098 ssh -n multinode-396098-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-arm64 -p multinode-396098 cp multinode-396098-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3041471834/001/cp-test_multinode-396098-m03.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p multinode-396098 ssh -n multinode-396098-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-arm64 -p multinode-396098 cp multinode-396098-m03:/home/docker/cp-test.txt multinode-396098:/home/docker/cp-test_multinode-396098-m03_multinode-396098.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p multinode-396098 ssh -n multinode-396098-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p multinode-396098 ssh -n multinode-396098 "sudo cat /home/docker/cp-test_multinode-396098-m03_multinode-396098.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-arm64 -p multinode-396098 cp multinode-396098-m03:/home/docker/cp-test.txt multinode-396098-m02:/home/docker/cp-test_multinode-396098-m03_multinode-396098-m02.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p multinode-396098 ssh -n multinode-396098-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p multinode-396098 ssh -n multinode-396098-m02 "sudo cat /home/docker/cp-test_multinode-396098-m03_multinode-396098-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.93s)
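
Note: `minikube cp` is exercised in all three directions: host->node, node->host,
and node->node, and every transfer is verified with an over-SSH `sudo cat`. One
leg in isolation:

    $ minikube -p multinode-396098 cp testdata/cp-test.txt \
        multinode-396098-m02:/home/docker/cp-test.txt
    $ minikube -p multinode-396098 ssh -n multinode-396098-m02 \
        "sudo cat /home/docker/cp-test.txt"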

                                                
                                    
TestMultiNode/serial/StopNode (2.26s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-396098 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-396098 node stop m03: (1.226933424s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-396098 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-396098 status: exit status 7 (517.533745ms)

                                                
                                                
-- stdout --
	multinode-396098
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-396098-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-396098-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-396098 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-396098 status --alsologtostderr: exit status 7 (518.49174ms)

                                                
                                                
-- stdout --
	multinode-396098
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-396098-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-396098-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0831 22:52:01.222871 1293098 out.go:345] Setting OutFile to fd 1 ...
	I0831 22:52:01.223019 1293098 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:52:01.223032 1293098 out.go:358] Setting ErrFile to fd 2...
	I0831 22:52:01.223038 1293098 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:52:01.223282 1293098 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-1161402/.minikube/bin
	I0831 22:52:01.223479 1293098 out.go:352] Setting JSON to false
	I0831 22:52:01.223525 1293098 mustload.go:65] Loading cluster: multinode-396098
	I0831 22:52:01.223620 1293098 notify.go:220] Checking for updates...
	I0831 22:52:01.223953 1293098 config.go:182] Loaded profile config "multinode-396098": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0831 22:52:01.223964 1293098 status.go:255] checking status of multinode-396098 ...
	I0831 22:52:01.224869 1293098 cli_runner.go:164] Run: docker container inspect multinode-396098 --format={{.State.Status}}
	I0831 22:52:01.248184 1293098 status.go:330] multinode-396098 host status = "Running" (err=<nil>)
	I0831 22:52:01.248225 1293098 host.go:66] Checking if "multinode-396098" exists ...
	I0831 22:52:01.248674 1293098 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-396098")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-396098
	I0831 22:52:01.271215 1293098 host.go:66] Checking if "multinode-396098" exists ...
	I0831 22:52:01.271573 1293098 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0831 22:52:01.271632 1293098 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-396098
	I0831 22:52:01.292535 1293098 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34414 SSHKeyPath:/home/jenkins/minikube-integration/18943-1161402/.minikube/machines/multinode-396098/id_rsa Username:docker}
	I0831 22:52:01.389964 1293098 ssh_runner.go:195] Run: systemctl --version
	I0831 22:52:01.394338 1293098 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0831 22:52:01.406407 1293098 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0831 22:52:01.467173 1293098 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:61 SystemTime:2024-08-31 22:52:01.456610592 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0831 22:52:01.468065 1293098 kubeconfig.go:125] found "multinode-396098" server: "https://192.168.67.2:8443"
	I0831 22:52:01.468096 1293098 api_server.go:166] Checking apiserver status ...
	I0831 22:52:01.468215 1293098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0831 22:52:01.479762 1293098 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1379/cgroup
	I0831 22:52:01.489727 1293098 api_server.go:182] apiserver freezer: "5:freezer:/docker/6c10c9b54f31e4f57594fb2bf6cd8b40bf91369d8c94f65bee34408bf5680490/kubepods/burstable/pod00d6bcd0c17191f3846deeed1371cc4b/762ef8032ac3595f62fa8e8ca4206058b15c8ade30b671c0d769c60949369d49"
	I0831 22:52:01.489804 1293098 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/6c10c9b54f31e4f57594fb2bf6cd8b40bf91369d8c94f65bee34408bf5680490/kubepods/burstable/pod00d6bcd0c17191f3846deeed1371cc4b/762ef8032ac3595f62fa8e8ca4206058b15c8ade30b671c0d769c60949369d49/freezer.state
	I0831 22:52:01.499800 1293098 api_server.go:204] freezer state: "THAWED"
	I0831 22:52:01.499833 1293098 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0831 22:52:01.507584 1293098 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0831 22:52:01.507610 1293098 status.go:422] multinode-396098 apiserver status = Running (err=<nil>)
	I0831 22:52:01.507622 1293098 status.go:257] multinode-396098 status: &{Name:multinode-396098 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0831 22:52:01.507648 1293098 status.go:255] checking status of multinode-396098-m02 ...
	I0831 22:52:01.507964 1293098 cli_runner.go:164] Run: docker container inspect multinode-396098-m02 --format={{.State.Status}}
	I0831 22:52:01.525263 1293098 status.go:330] multinode-396098-m02 host status = "Running" (err=<nil>)
	I0831 22:52:01.525302 1293098 host.go:66] Checking if "multinode-396098-m02" exists ...
	I0831 22:52:01.525605 1293098 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-396098")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-396098-m02
	I0831 22:52:01.542449 1293098 host.go:66] Checking if "multinode-396098-m02" exists ...
	I0831 22:52:01.542872 1293098 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0831 22:52:01.542965 1293098 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-396098-m02
	I0831 22:52:01.559409 1293098 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34419 SSHKeyPath:/home/jenkins/minikube-integration/18943-1161402/.minikube/machines/multinode-396098-m02/id_rsa Username:docker}
	I0831 22:52:01.649889 1293098 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0831 22:52:01.661987 1293098 status.go:257] multinode-396098-m02 status: &{Name:multinode-396098-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0831 22:52:01.662025 1293098 status.go:255] checking status of multinode-396098-m03 ...
	I0831 22:52:01.662342 1293098 cli_runner.go:164] Run: docker container inspect multinode-396098-m03 --format={{.State.Status}}
	I0831 22:52:01.679015 1293098 status.go:330] multinode-396098-m03 host status = "Stopped" (err=<nil>)
	I0831 22:52:01.679044 1293098 status.go:343] host is not running, skipping remaining checks
	I0831 22:52:01.679051 1293098 status.go:257] multinode-396098-m03 status: &{Name:multinode-396098-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.26s)
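
Note: the stderr trace shows how control-plane status is derived: find the
kube-apiserver pid, confirm its freezer cgroup is THAWED (not frozen), then hit
/healthz. The cgroup leg by hand (pid 1379 is the one from this run):

    $ minikube -p multinode-396098 ssh -- "sudo pgrep -xnf kube-apiserver.*minikube.*"
    1379
    $ minikube -p multinode-396098 ssh -- "sudo egrep '^[0-9]+:freezer:' /proc/1379/cgroup"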

                                                
                                    
TestMultiNode/serial/StartAfterStop (9.44s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-396098 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-396098 node start m03 -v=7 --alsologtostderr: (8.671454531s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-396098 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.44s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (98.11s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-396098
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-396098
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-396098: (24.930690104s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-396098 --wait=true -v=8 --alsologtostderr
E0831 22:53:37.567224 1166785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/functional-059694/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-396098 --wait=true -v=8 --alsologtostderr: (1m13.045518576s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-396098
--- PASS: TestMultiNode/serial/RestartKeepsNodes (98.11s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.56s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-396098 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-396098 node delete m03: (4.885094562s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-396098 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.56s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (23.93s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-396098 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-396098 stop: (23.751187344s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-396098 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-396098 status: exit status 7 (94.842829ms)

                                                
                                                
-- stdout --
	multinode-396098
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-396098-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-396098 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-396098 status --alsologtostderr: exit status 7 (87.874182ms)

                                                
                                                
-- stdout --
	multinode-396098
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-396098-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0831 22:54:18.691262 1301582 out.go:345] Setting OutFile to fd 1 ...
	I0831 22:54:18.691393 1301582 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:54:18.691405 1301582 out.go:358] Setting ErrFile to fd 2...
	I0831 22:54:18.691410 1301582 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:54:18.691659 1301582 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-1161402/.minikube/bin
	I0831 22:54:18.691852 1301582 out.go:352] Setting JSON to false
	I0831 22:54:18.691896 1301582 mustload.go:65] Loading cluster: multinode-396098
	I0831 22:54:18.692046 1301582 notify.go:220] Checking for updates...
	I0831 22:54:18.692295 1301582 config.go:182] Loaded profile config "multinode-396098": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0831 22:54:18.692314 1301582 status.go:255] checking status of multinode-396098 ...
	I0831 22:54:18.693141 1301582 cli_runner.go:164] Run: docker container inspect multinode-396098 --format={{.State.Status}}
	I0831 22:54:18.711023 1301582 status.go:330] multinode-396098 host status = "Stopped" (err=<nil>)
	I0831 22:54:18.711043 1301582 status.go:343] host is not running, skipping remaining checks
	I0831 22:54:18.711051 1301582 status.go:257] multinode-396098 status: &{Name:multinode-396098 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0831 22:54:18.711088 1301582 status.go:255] checking status of multinode-396098-m02 ...
	I0831 22:54:18.711414 1301582 cli_runner.go:164] Run: docker container inspect multinode-396098-m02 --format={{.State.Status}}
	I0831 22:54:18.730180 1301582 status.go:330] multinode-396098-m02 host status = "Stopped" (err=<nil>)
	I0831 22:54:18.730208 1301582 status.go:343] host is not running, skipping remaining checks
	I0831 22:54:18.730215 1301582 status.go:257] multinode-396098-m02 status: &{Name:multinode-396098-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.93s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (49.21s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-396098 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0831 22:54:23.742535 1166785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/addons-516593/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:55:00.632519 1166785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/functional-059694/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-396098 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (48.536220753s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-396098 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (49.21s)
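
Note: the final assertion walks every node's conditions and prints the status of
its "Ready" condition; a clean restart yields one "True" line per node and nothing
else:

    $ kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'
    True
    True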

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (33.41s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-396098
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-396098-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-396098-m02 --driver=docker  --container-runtime=containerd: exit status 14 (84.849306ms)

                                                
                                                
-- stdout --
	* [multinode-396098-m02] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18943
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18943-1161402/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-1161402/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-396098-m02' is duplicated with machine name 'multinode-396098-m02' in profile 'multinode-396098'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-396098-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-396098-m03 --driver=docker  --container-runtime=containerd: (31.033752225s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-396098
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-396098: exit status 80 (309.40081ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-396098 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-396098-m03 already exists in multinode-396098-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-396098-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-396098-m03: (1.935712466s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (33.41s)
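
Note: both non-zero exits above are deliberate guards: a new profile whose name
collides with an existing cluster's machine name is refused up front (exit 14,
MK_USAGE), and `node add` refuses a node name already owned elsewhere (exit 80):

    $ minikube start -p multinode-396098-m02 --driver=docker
    X Exiting due to MK_USAGE: Profile name should be unique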

                                                
                                    
TestPreload (110.26s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-125486 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-125486 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m13.166791927s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-125486 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-125486 image pull gcr.io/k8s-minikube/busybox: (2.088650582s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-125486
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-125486: (12.088006102s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-125486 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-125486 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (20.159745302s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-125486 image list
helpers_test.go:176: Cleaning up "test-preload-125486" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-125486
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-125486: (2.393294708s)
--- PASS: TestPreload (110.26s)
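
Note: the scenario is create-without-preload (--preload=false on an older
Kubernetes), side-load an image, stop, then restart with preloads enabled and
assert the side-loaded image survived:

    $ minikube -p test-preload-125486 image pull gcr.io/k8s-minikube/busybox
    $ minikube stop -p test-preload-125486 && minikube start -p test-preload-125486
    $ minikube -p test-preload-125486 image list    # busybox must still be listed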

                                                
                                    
TestScheduledStopUnix (107.19s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-058663 --memory=2048 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-058663 --memory=2048 --driver=docker  --container-runtime=containerd: (29.999347661s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-058663 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-058663 -n scheduled-stop-058663
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-058663 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-058663 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-058663 -n scheduled-stop-058663
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-058663
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-058663 --schedule 15s
E0831 22:58:37.569581 1166785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/functional-059694/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-058663
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-058663: exit status 7 (65.643581ms)

                                                
                                                
-- stdout --
	scheduled-stop-058663
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-058663 -n scheduled-stop-058663
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-058663 -n scheduled-stop-058663: exit status 7 (63.015259ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-058663" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-058663
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-058663: (5.626201768s)
--- PASS: TestScheduledStopUnix (107.19s)
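
Note: scheduled stops appear to be armed via a background minikube process (hence
the "os: process already finished" signal messages above); {{.TimeToStop}} exposes
the countdown and --cancel-scheduled disarms it:

    $ minikube stop -p scheduled-stop-058663 --schedule 5m
    $ minikube status -p scheduled-stop-058663 --format='{{.TimeToStop}}'
    $ minikube stop -p scheduled-stop-058663 --cancel-scheduled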

                                                
                                    
TestInsufficientStorage (10.24s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-389490 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd
E0831 22:59:23.742320 1166785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/addons-516593/client.crt: no such file or directory" logger="UnhandledError"
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-389490 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (7.827506073s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"770ca4c1-44be-45e2-87cb-ecb4816ec96c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-389490] minikube v1.33.1 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"68638b3b-ab15-40a2-a656-d11b354c098b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18943"}}
	{"specversion":"1.0","id":"e1385db3-d7dc-40f5-9f21-c3147feacb1e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"43d9f700-c722-4403-9e2d-1c52fac0675a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18943-1161402/kubeconfig"}}
	{"specversion":"1.0","id":"db4c1867-0019-4422-83bd-68c12aaefdac","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-1161402/.minikube"}}
	{"specversion":"1.0","id":"7b201214-8230-40c6-b48f-5b070badc1f0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"7fbcd869-0a24-4991-9be7-c2dd11a477a7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"3c04fd08-790b-4c5e-ae03-466d95de532b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"c617b1d9-7765-42de-adce-760590dd4c43","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"173c6fe0-2787-4a77-86dc-453af833203f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"91df8860-fe4d-40e5-b7b9-b787baf9b33a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"31e98e5c-f13b-4ed0-bd39-0f5240025c53","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-389490\" primary control-plane node in \"insufficient-storage-389490\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"3b931d2c-062d-46a9-bfcd-ebc231d0f581","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.44-1724862063-19530 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"e87144cc-ba96-40d9-80f2-f2570640e830","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"a633c0c2-27bf-45d7-ba25-d29e2b217e2f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
helpers_test.go:700: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-389490 --output=json --layout=cluster
helpers_test.go:700: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-389490 --output=json --layout=cluster: exit status 7 (273.542509ms)

-- stdout --
	{"Name":"insufficient-storage-389490","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-389490","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0831 22:59:30.858323 1320163 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-389490" does not appear in /home/jenkins/minikube-integration/18943-1161402/kubeconfig

** /stderr **
helpers_test.go:700: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-389490 --output=json --layout=cluster
helpers_test.go:700: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-389490 --output=json --layout=cluster: exit status 7 (285.633962ms)

-- stdout --
	{"Name":"insufficient-storage-389490","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-389490","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0831 22:59:31.141672 1320226 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-389490" does not appear in /home/jenkins/minikube-integration/18943-1161402/kubeconfig
	E0831 22:59:31.151940 1320226 status.go:560] unable to read event log: stat: stat /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/insufficient-storage-389490/events.json: no such file or directory

** /stderr **
helpers_test.go:176: Cleaning up "insufficient-storage-389490" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-389490
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-389490: (1.855019022s)
--- PASS: TestInsufficientStorage (10.24s)
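
Note: this test passes because the pre-flight disk check fires as intended; the RSRC_DOCKER_STORAGE error above (exit code 26) triggers once /var reaches 100% of capacity, with the simulated shortage driven by the MINIKUBE_TEST_STORAGE_CAPACITY and MINIKUBE_TEST_AVAILABLE_STORAGE variables visible in the log. A minimal sketch of the remediation steps from the error's own advice, plus an assumed follow-up check with df:

    docker system prune -a               # remove unused Docker data, including unused images
    minikube ssh -- docker system prune  # prune inside the node (Docker container runtime only)
    df -h /var                           # assumed verification step: confirm /var has free space again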

TestRunningBinaryUpgrade (72.47s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.3199401803 start -p running-upgrade-269890 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.3199401803 start -p running-upgrade-269890 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (35.516786212s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-269890 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-269890 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (33.784605515s)
helpers_test.go:176: Cleaning up "running-upgrade-269890" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-269890
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-269890: (2.403727679s)
--- PASS: TestRunningBinaryUpgrade (72.47s)

TestKubernetesUpgrade (344.52s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-369182 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-369182 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m1.308716298s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-369182
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-369182: (1.455640909s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-369182 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-369182 status --format={{.Host}}: exit status 7 (93.166241ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-369182 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-369182 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m33.38895439s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-369182 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-369182 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-369182 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd: exit status 106 (76.182211ms)

-- stdout --
	* [kubernetes-upgrade-369182] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18943
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18943-1161402/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-1161402/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-369182
	    minikube start -p kubernetes-upgrade-369182 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-3691822 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0, by running:
	    
	    minikube start -p kubernetes-upgrade-369182 --kubernetes-version=v1.31.0
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-369182 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-369182 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (5.862204002s)
helpers_test.go:176: Cleaning up "kubernetes-upgrade-369182" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-369182
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-369182: (2.168958251s)
--- PASS: TestKubernetesUpgrade (344.52s)
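
Note: the downgrade refusal above (K8S_DOWNGRADE_UNSUPPORTED, exit status 106) is the behavior under test; only forward upgrades are supported in place. A condensed sketch of the flow this test drives, using the same flags that appear in the log:

    minikube start -p kubernetes-upgrade-369182 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker --container-runtime=containerd
    minikube stop -p kubernetes-upgrade-369182
    minikube start -p kubernetes-upgrade-369182 --memory=2200 --kubernetes-version=v1.31.0 --driver=docker --container-runtime=containerd
    # going back to v1.20.0 now fails; per the suggestion above, recreate instead:
    minikube delete -p kubernetes-upgrade-369182
    minikube start -p kubernetes-upgrade-369182 --kubernetes-version=v1.20.0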

TestMissingContainerUpgrade (182.1s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.1833962150 start -p missing-upgrade-443166 --memory=2200 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.1833962150 start -p missing-upgrade-443166 --memory=2200 --driver=docker  --container-runtime=containerd: (1m41.293957854s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-443166
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-443166: (10.287440816s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-443166
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-443166 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-443166 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m6.898074885s)
helpers_test.go:176: Cleaning up "missing-upgrade-443166" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-443166
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-443166: (2.242373454s)
--- PASS: TestMissingContainerUpgrade (182.10s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-837856 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-837856 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd: exit status 14 (77.352816ms)

-- stdout --
	* [NoKubernetes-837856] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18943
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18943-1161402/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-1161402/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
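
Note: this pass is another expected rejection: --no-kubernetes and --kubernetes-version are mutually exclusive, so minikube exits with MK_USAGE (status 14) before doing any work. The fix is the one the error itself prints:

    minikube config unset kubernetes-version   # clear any globally configured version
    minikube start -p NoKubernetes-837856 --no-kubernetes --driver=docker --container-runtime=containerd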

TestNoKubernetes/serial/StartWithK8s (39.31s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-837856 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-837856 --driver=docker  --container-runtime=containerd: (38.953099127s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-837856 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (39.31s)

TestNoKubernetes/serial/StartWithStopK8s (21.39s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-837856 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-837856 --no-kubernetes --driver=docker  --container-runtime=containerd: (17.412170108s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-837856 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-837856 status -o json: exit status 2 (416.524885ms)

-- stdout --
	{"Name":"NoKubernetes-837856","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-837856
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-837856: (3.560634919s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (21.39s)

TestNoKubernetes/serial/Start (6.59s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-837856 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-837856 --no-kubernetes --driver=docker  --container-runtime=containerd: (6.585340397s)
--- PASS: TestNoKubernetes/serial/Start (6.59s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.26s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-837856 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-837856 "sudo systemctl is-active --quiet service kubelet": exit status 1 (261.895191ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.26s)
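
Note: the "Process exited with status 3" in stderr is the signal being asserted here: systemctl is-active exits 0 for an active unit and non-zero (typically 3) for an inactive one, and ssh propagates that code. An equivalent manual check against the profile from the log:

    minikube ssh -p NoKubernetes-837856 "sudo systemctl is-active kubelet"
    echo $?    # expect non-zero while Kubernetes is disabled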

TestNoKubernetes/serial/ProfileList (0.9s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.90s)

TestNoKubernetes/serial/Stop (1.19s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-837856
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-837856: (1.189792509s)
--- PASS: TestNoKubernetes/serial/Stop (1.19s)

TestNoKubernetes/serial/StartNoArgs (6.58s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-837856 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-837856 --driver=docker  --container-runtime=containerd: (6.577763218s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.58s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.33s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-837856 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-837856 "sudo systemctl is-active --quiet service kubelet": exit status 1 (330.282123ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.33s)

TestStoppedBinaryUpgrade/Setup (0.7s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.70s)

TestStoppedBinaryUpgrade/Upgrade (106.78s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2437751044 start -p stopped-upgrade-928721 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2437751044 start -p stopped-upgrade-928721 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (45.635649813s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2437751044 -p stopped-upgrade-928721 stop
E0831 23:03:37.568376 1166785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/functional-059694/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2437751044 -p stopped-upgrade-928721 stop: (19.968952663s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-928721 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-928721 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (41.170786949s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (106.78s)
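
Note: the stopped-binary upgrade path provisions a profile with an old release binary, stops it, then restarts the same profile with the binary under test; the cert_rotation error in between appears to be unrelated noise from another profile's deleted client.crt. In outline, with the binary paths taken verbatim from the log:

    /tmp/minikube-v1.26.0.2437751044 start -p stopped-upgrade-928721 --memory=2200 --vm-driver=docker --container-runtime=containerd
    /tmp/minikube-v1.26.0.2437751044 -p stopped-upgrade-928721 stop
    out/minikube-linux-arm64 start -p stopped-upgrade-928721 --memory=2200 --driver=docker --container-runtime=containerd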

TestStoppedBinaryUpgrade/MinikubeLogs (0.98s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-928721
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.98s)

TestPause/serial/Start (60.06s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-779774 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-779774 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (1m0.063024663s)
--- PASS: TestPause/serial/Start (60.06s)

TestPause/serial/SecondStartNoReconfiguration (7.24s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-779774 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-779774 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (7.228097072s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (7.24s)

TestPause/serial/Pause (0.8s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-779774 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.80s)

TestPause/serial/VerifyStatus (0.36s)

=== RUN   TestPause/serial/VerifyStatus
helpers_test.go:700: (dbg) Run:  out/minikube-linux-arm64 status -p pause-779774 --output=json --layout=cluster
helpers_test.go:700: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-779774 --output=json --layout=cluster: exit status 2 (360.186854ms)

-- stdout --
	{"Name":"pause-779774","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-779774","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.36s)
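
Note: the cluster-layout status view appears to mirror HTTP status codes (200 OK, 405 Stopped, 418 Paused, 507 InsufficientStorage), and a paused cluster makes the status command itself exit 2, which is why the non-zero exit above is a pass. To inspect a profile the same way the test does:

    minikube status -p pause-779774 --output=json --layout=cluster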

TestPause/serial/Unpause (0.85s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-779774 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.85s)

TestPause/serial/PauseAgain (0.97s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-779774 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.97s)

TestPause/serial/DeletePaused (2.87s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-779774 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-779774 --alsologtostderr -v=5: (2.870749231s)
--- PASS: TestPause/serial/DeletePaused (2.87s)

TestPause/serial/VerifyDeletedResources (0.52s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-779774
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-779774: exit status 1 (33.810489ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-779774: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.52s)
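
Note: deleted-resource verification probes Docker directly rather than trusting minikube's own bookkeeping; a cleanly deleted profile should leave no container, volume, or network behind. The same probes by hand (the volume check is expected to fail with "no such volume"):

    docker ps -a                         # profile container should be absent
    docker volume inspect pause-779774   # exit 1 once the volume is gone
    docker network ls                    # profile network should be absent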

TestNetworkPlugins/group/false (4.92s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-263741 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-263741 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (229.488932ms)

-- stdout --
	* [false-263741] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18943
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18943-1161402/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-1161402/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I0831 23:06:56.828785 1360696 out.go:345] Setting OutFile to fd 1 ...
	I0831 23:06:56.828988 1360696 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 23:06:56.829013 1360696 out.go:358] Setting ErrFile to fd 2...
	I0831 23:06:56.829031 1360696 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 23:06:56.829386 1360696 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-1161402/.minikube/bin
	I0831 23:06:56.829923 1360696 out.go:352] Setting JSON to false
	I0831 23:06:56.830883 1360696 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":24566,"bootTime":1725121051,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0831 23:06:56.831002 1360696 start.go:139] virtualization:  
	I0831 23:06:56.834482 1360696 out.go:177] * [false-263741] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0831 23:06:56.837525 1360696 out.go:177]   - MINIKUBE_LOCATION=18943
	I0831 23:06:56.837593 1360696 notify.go:220] Checking for updates...
	I0831 23:06:56.842508 1360696 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0831 23:06:56.845216 1360696 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18943-1161402/kubeconfig
	I0831 23:06:56.847695 1360696 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-1161402/.minikube
	I0831 23:06:56.850120 1360696 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0831 23:06:56.852885 1360696 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0831 23:06:56.855872 1360696 config.go:182] Loaded profile config "force-systemd-flag-256645": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0831 23:06:56.856017 1360696 driver.go:392] Setting default libvirt URI to qemu:///system
	I0831 23:06:56.891794 1360696 docker.go:123] docker version: linux-27.2.0:Docker Engine - Community
	I0831 23:06:56.891959 1360696 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0831 23:06:56.976979 1360696 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:53 SystemTime:2024-08-31 23:06:56.966896808 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0831 23:06:56.977090 1360696 docker.go:307] overlay module found
	I0831 23:06:56.980035 1360696 out.go:177] * Using the docker driver based on user configuration
	I0831 23:06:56.982441 1360696 start.go:297] selected driver: docker
	I0831 23:06:56.982457 1360696 start.go:901] validating driver "docker" against <nil>
	I0831 23:06:56.982470 1360696 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0831 23:06:56.985638 1360696 out.go:201] 
	W0831 23:06:56.988208 1360696 out.go:270] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0831 23:06:56.990671 1360696 out.go:201] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-263741 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-263741

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-263741

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-263741

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-263741

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-263741

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-263741

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-263741

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-263741

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-263741

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-263741

>>> host: /etc/nsswitch.conf:
* Profile "false-263741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-263741"

>>> host: /etc/hosts:
* Profile "false-263741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-263741"

>>> host: /etc/resolv.conf:
* Profile "false-263741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-263741"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-263741

>>> host: crictl pods:
* Profile "false-263741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-263741"

>>> host: crictl containers:
* Profile "false-263741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-263741"

>>> k8s: describe netcat deployment:
error: context "false-263741" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-263741" does not exist

>>> k8s: netcat logs:
error: context "false-263741" does not exist

>>> k8s: describe coredns deployment:
error: context "false-263741" does not exist

>>> k8s: describe coredns pods:
error: context "false-263741" does not exist

>>> k8s: coredns logs:
error: context "false-263741" does not exist

>>> k8s: describe api server pod(s):
error: context "false-263741" does not exist

>>> k8s: api server logs:
error: context "false-263741" does not exist

>>> host: /etc/cni:
* Profile "false-263741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-263741"

>>> host: ip a s:
* Profile "false-263741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-263741"

>>> host: ip r s:
* Profile "false-263741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-263741"

>>> host: iptables-save:
* Profile "false-263741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-263741"

>>> host: iptables table nat:
* Profile "false-263741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-263741"

>>> k8s: describe kube-proxy daemon set:
error: context "false-263741" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-263741" does not exist

>>> k8s: kube-proxy logs:
error: context "false-263741" does not exist

>>> host: kubelet daemon status:
* Profile "false-263741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-263741"

>>> host: kubelet daemon config:
* Profile "false-263741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-263741"

>>> k8s: kubelet logs:
* Profile "false-263741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-263741"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-263741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-263741"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-263741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-263741"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-263741

>>> host: docker daemon status:
* Profile "false-263741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-263741"

>>> host: docker daemon config:
* Profile "false-263741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-263741"

>>> host: /etc/docker/daemon.json:
* Profile "false-263741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-263741"

>>> host: docker system info:
* Profile "false-263741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-263741"

>>> host: cri-docker daemon status:
* Profile "false-263741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-263741"

>>> host: cri-docker daemon config:
* Profile "false-263741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-263741"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-263741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-263741"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-263741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-263741"

>>> host: cri-dockerd version:
* Profile "false-263741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-263741"

>>> host: containerd daemon status:
* Profile "false-263741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-263741"

>>> host: containerd daemon config:
* Profile "false-263741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-263741"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-263741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-263741"

>>> host: /etc/containerd/config.toml:
* Profile "false-263741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-263741"

>>> host: containerd config dump:
* Profile "false-263741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-263741"

>>> host: crio daemon status:
* Profile "false-263741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-263741"

>>> host: crio daemon config:
* Profile "false-263741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-263741"

>>> host: /etc/crio:
* Profile "false-263741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-263741"

>>> host: crio config:
* Profile "false-263741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-263741"

----------------------- debugLogs end: false-263741 [took: 4.485939564s] --------------------------------
helpers_test.go:176: Cleaning up "false-263741" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p false-263741
--- PASS: TestNetworkPlugins/group/false (4.92s)
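
Note: this group passes because the start is rejected up front: containerd has no built-in pod networking, so --cni=false exits with MK_USAGE (status 14) before a node is created, which is also why every debugLogs probe above reports a missing profile or context. The failing invocation, reduced to its essentials:

    minikube start -p false-263741 --cni=false --driver=docker --container-runtime=containerd
    # X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI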

TestStartStop/group/old-k8s-version/serial/FirstStart (156.27s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-777320 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
E0831 23:08:37.567027 1166785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/functional-059694/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:09:23.742182 1166785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/addons-516593/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-777320 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: (2m36.268368257s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (156.27s)

TestStartStop/group/no-preload/serial/FirstStart (76.49s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-039701 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-039701 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0: (1m16.493191653s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (76.49s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.87s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-777320 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:345: "busybox" [6b5374e1-e615-479b-a654-03c4d1c21536] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:345: "busybox" [6b5374e1-e615-479b-a654-03c4d1c21536] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.005192301s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-777320 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.87s)
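
DeployApp creates the busybox manifest, waits up to 8m for the integration-test=busybox pod to become Ready, then reads the pod's open-file limit. A rough equivalent outside the harness, shelling out to kubectl with the context name from the log (a sketch, not the test's implementation):

package main

import (
	"log"
	"os/exec"
)

// run invokes kubectl against the profile's context and aborts on any error.
func run(args ...string) string {
	args = append([]string{"--context", "old-k8s-version-777320"}, args...)
	out, err := exec.Command("kubectl", args...).CombinedOutput()
	if err != nil {
		log.Fatalf("kubectl %v: %v\n%s", args, err, out)
	}
	return string(out)
}

func main() {
	run("create", "-f", "testdata/busybox.yaml")
	// Same 8m budget the harness gives the integration-test=busybox pod.
	run("wait", "--for=condition=Ready", "pod",
		"-l", "integration-test=busybox", "--timeout=8m")
	log.Printf("open-file limit in the pod: %s",
		run("exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n"))
}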

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.37s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-777320 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-777320 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.198189582s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-777320 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.37s)
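
EnableAddonWhileActive passes --images and --registries overrides so that metrics-server points at a fake registry, then describes the Deployment to confirm the rewrite. A small follow-up check along the same lines, assuming only kubectl and the context above (the fake.domain substring test is an illustrative assumption, not the harness's exact assertion):

package main

import (
	"log"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("kubectl", "--context", "old-k8s-version-777320",
		"describe", "deploy/metrics-server", "-n", "kube-system").CombinedOutput()
	if err != nil {
		log.Fatalf("describe failed: %v\n%s", err, out)
	}
	// addons enable rewrote the image, so the Deployment should now pull
	// from the fake registry given on the command line.
	if !strings.Contains(string(out), "fake.domain") {
		log.Fatal("metrics-server image was not rewritten")
	}
	log.Println("registry override is in place")
}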

TestStartStop/group/old-k8s-version/serial/Stop (13.55s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-777320 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-777320 --alsologtostderr -v=3: (13.554344024s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (13.55s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-777320 -n old-k8s-version-777320
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-777320 -n old-k8s-version-777320: exit status 7 (84.203543ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-777320 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)
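
The step above tolerates the non-zero exit deliberately: status returns exit status 7 for a stopped profile, and the harness notes it "may be ok". A sketch of the same tolerance in Go, treating exit code 7 as an expected answer rather than a failure:

package main

import (
	"errors"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "status",
		"--format={{.Host}}", "-p", "old-k8s-version-777320").Output()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 7 {
		// Exit 7 is what a stopped profile returns; stdout still holds the state.
		fmt.Printf("host is %s (exit 7, treated as ok)\n", out)
		return
	}
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("host is %s\n", out)
}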

TestStartStop/group/no-preload/serial/DeployApp (9.44s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-039701 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:345: "busybox" [95329da1-813b-4287-a5be-75f69c8e2151] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:345: "busybox" [95329da1-813b-4287-a5be-75f69c8e2151] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.003724877s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-039701 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.44s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.2s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-039701 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-039701 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.067704224s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-039701 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.20s)

TestStartStop/group/no-preload/serial/Stop (12.09s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-039701 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-039701 --alsologtostderr -v=3: (12.085058459s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.09s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-039701 -n no-preload-039701
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-039701 -n no-preload-039701: exit status 7 (72.215504ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-039701 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/no-preload/serial/SecondStart (267.68s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-039701 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0
E0831 23:13:37.566815 1166785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/functional-059694/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:14:23.742138 1166785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/addons-516593/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-039701 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0: (4m27.312283537s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-039701 -n no-preload-039701
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (267.68s)
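
SecondStart reruns the exact start command against the stopped profile, so the cluster resumes with its previous state (including the dashboard addon enabled while it was stopped). A sketch that restarts the profile and then polls the host state until it reports Running; the poll loop and the 2-minute budget are illustrative choices, not the harness's logic:

package main

import (
	"log"
	"os/exec"
	"strings"
	"time"
)

func main() {
	start := exec.Command("out/minikube-linux-arm64", "start",
		"-p", "no-preload-039701", "--memory=2200", "--wait=true",
		"--preload=false", "--driver=docker",
		"--container-runtime=containerd", "--kubernetes-version=v1.31.0")
	if out, err := start.CombinedOutput(); err != nil {
		log.Fatalf("second start failed: %v\n%s", err, out)
	}
	// Poll the host field the way the follow-up status call does.
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		out, _ := exec.Command("out/minikube-linux-arm64", "status",
			"--format={{.Host}}", "-p", "no-preload-039701").Output()
		if strings.TrimSpace(string(out)) == "Running" {
			log.Println("host is Running after the restart")
			return
		}
		time.Sleep(5 * time.Second)
	}
	log.Fatal("host never reported Running")
}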

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:345: "kubernetes-dashboard-695b96c756-885dm" [201cfef2-f9aa-4da1-80b3-eae84f55d3fa] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003264525s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:345: "kubernetes-dashboard-695b96c756-885dm" [201cfef2-f9aa-4da1-80b3-eae84f55d3fa] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004351075s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-039701 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-039701 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)
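
VerifyKubernetesImages lists the profile's images as JSON and flags anything outside the expected registries, which is why the kindest/kindnetd and busybox tags are called out above. A sketch of that scan; the repoTags field name is an assumption about the --format=json output, and the registry prefix list is illustrative rather than the harness's exact allow-list:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
	"strings"
)

// image models just the field this scan needs; the JSON shape is assumed.
type image struct {
	RepoTags []string `json:"repoTags"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "no-preload-039701",
		"image", "list", "--format=json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		log.Fatal(err)
	}
	for _, img := range images {
		for _, tag := range img.RepoTags {
			// Report anything outside registry.k8s.io, mirroring the
			// "Found non-minikube image" lines in the log.
			if !strings.HasPrefix(tag, "registry.k8s.io/") {
				fmt.Println("found non-minikube image:", tag)
			}
		}
	}
}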

TestStartStop/group/no-preload/serial/Pause (3.13s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-039701 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-039701 -n no-preload-039701
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-039701 -n no-preload-039701: exit status 2 (334.370155ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-039701 -n no-preload-039701
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-039701 -n no-preload-039701: exit status 2 (328.01753ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-039701 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-039701 -n no-preload-039701
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-039701 -n no-preload-039701
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.13s)
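
The Pause step expects exit status 2 from both status probes while the profile is paused (the APIServer field reads Paused, the Kubelet field reads Stopped), and only then unpauses. A condensed sketch of that sequence, with the exit-code-2 tolerance factored into a helper:

package main

import (
	"errors"
	"log"
	"os/exec"
)

// status probes one component, tolerating exit code 2, which the log shows
// for components that are paused or stopped.
func status(format string) string {
	out, err := exec.Command("out/minikube-linux-arm64", "status",
		"--format", format, "-p", "no-preload-039701").Output()
	var exitErr *exec.ExitError
	if err != nil && !(errors.As(err, &exitErr) && exitErr.ExitCode() == 2) {
		log.Fatal(err)
	}
	return string(out)
}

func mustRun(verb string) {
	if out, err := exec.Command("out/minikube-linux-arm64", verb,
		"-p", "no-preload-039701").CombinedOutput(); err != nil {
		log.Fatalf("%s failed: %v\n%s", verb, err, out)
	}
}

func main() {
	mustRun("pause")
	log.Printf("paused: apiserver=%s kubelet=%s",
		status("{{.APIServer}}"), status("{{.Kubelet}}"))
	mustRun("unpause")
	log.Printf("unpaused: apiserver=%s kubelet=%s",
		status("{{.APIServer}}"), status("{{.Kubelet}}"))
}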

TestStartStop/group/embed-certs/serial/FirstStart (55.01s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-642101 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-642101 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0: (55.007243142s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (55.01s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:345: "kubernetes-dashboard-cd95d586-m6vqc" [744701c8-7331-4303-871f-bc39bd1576f7] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.006536581s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.14s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:345: "kubernetes-dashboard-cd95d586-m6vqc" [744701c8-7331-4303-871f-bc39bd1576f7] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004360307s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-777320 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.14s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.37s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-777320 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.37s)

TestStartStop/group/old-k8s-version/serial/Pause (4.23s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-777320 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p old-k8s-version-777320 --alsologtostderr -v=1: (1.102916633s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-777320 -n old-k8s-version-777320
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-777320 -n old-k8s-version-777320: exit status 2 (476.82846ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-777320 -n old-k8s-version-777320
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-777320 -n old-k8s-version-777320: exit status 2 (493.368485ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-777320 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p old-k8s-version-777320 --alsologtostderr -v=1: (1.316868095s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-777320 -n old-k8s-version-777320
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-777320 -n old-k8s-version-777320
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (4.23s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (51.81s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-223442 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-223442 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0: (51.814183771s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (51.81s)

TestStartStop/group/embed-certs/serial/DeployApp (10.5s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-642101 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:345: "busybox" [6c401fa1-6036-4628-b94b-355ed555cab7] Pending
helpers_test.go:345: "busybox" [6c401fa1-6036-4628-b94b-355ed555cab7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:345: "busybox" [6c401fa1-6036-4628-b94b-355ed555cab7] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.004313633s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-642101 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.50s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.24s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-642101 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-642101 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.106384206s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-642101 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.24s)

TestStartStop/group/embed-certs/serial/Stop (12.23s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-642101 --alsologtostderr -v=3
E0831 23:18:37.567898 1166785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/functional-059694/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-642101 --alsologtostderr -v=3: (12.226737429s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.23s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-642101 -n embed-certs-642101
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-642101 -n embed-certs-642101: exit status 7 (77.143412ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-642101 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/embed-certs/serial/SecondStart (289.38s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-642101 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-642101 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0: (4m49.027404555s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-642101 -n embed-certs-642101
E0831 23:23:37.566878 1166785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/functional-059694/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (289.38s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.51s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-223442 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:345: "busybox" [94909213-2506-4909-a923-861b954f13e1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:345: "busybox" [94909213-2506-4909-a923-861b954f13e1] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.003572315s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-223442 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.51s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-223442 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-223442 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.465559091s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-223442 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.60s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.25s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-223442 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-223442 --alsologtostderr -v=3: (12.253784272s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.25s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.17s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-223442 -n default-k8s-diff-port-223442
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-223442 -n default-k8s-diff-port-223442: exit status 7 (68.87826ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-223442 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.17s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (265.87s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-223442 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0
E0831 23:19:23.742628 1166785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/addons-516593/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:21:06.161129 1166785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/old-k8s-version-777320/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:21:06.167606 1166785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/old-k8s-version-777320/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:21:06.178970 1166785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/old-k8s-version-777320/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:21:06.200367 1166785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/old-k8s-version-777320/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:21:06.241826 1166785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/old-k8s-version-777320/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:21:06.323323 1166785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/old-k8s-version-777320/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:21:06.484830 1166785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/old-k8s-version-777320/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:21:06.806739 1166785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/old-k8s-version-777320/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:21:07.448397 1166785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/old-k8s-version-777320/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:21:08.730437 1166785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/old-k8s-version-777320/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:21:11.292332 1166785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/old-k8s-version-777320/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:21:16.414533 1166785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/old-k8s-version-777320/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:21:26.655857 1166785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/old-k8s-version-777320/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:21:47.138279 1166785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/old-k8s-version-777320/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:22:21.200775 1166785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/no-preload-039701/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:22:21.207229 1166785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/no-preload-039701/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:22:21.218587 1166785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/no-preload-039701/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:22:21.240009 1166785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/no-preload-039701/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:22:21.281470 1166785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/no-preload-039701/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:22:21.362912 1166785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/no-preload-039701/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:22:21.524453 1166785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/no-preload-039701/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:22:21.846138 1166785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/no-preload-039701/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:22:22.487933 1166785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/no-preload-039701/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:22:23.769794 1166785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/no-preload-039701/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:22:26.332298 1166785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/no-preload-039701/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:22:28.100033 1166785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/old-k8s-version-777320/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:22:31.455250 1166785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/no-preload-039701/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:22:41.697501 1166785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/no-preload-039701/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:23:02.178810 1166785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/no-preload-039701/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-223442 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0: (4m25.51899037s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-223442 -n default-k8s-diff-port-223442
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (265.87s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:345: "kubernetes-dashboard-695b96c756-sw6rj" [72aab22f-dd7f-494f-b7c7-4b3c0d2d2a89] Running
E0831 23:23:43.141001 1166785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/no-preload-039701/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.0051647s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.12s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:345: "kubernetes-dashboard-695b96c756-sw6rj" [72aab22f-dd7f-494f-b7c7-4b3c0d2d2a89] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004515183s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-642101 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.12s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:345: "kubernetes-dashboard-695b96c756-w7pq8" [ad056e3e-5298-4b59-8935-67aa7928cb18] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004528242s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-642101 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240730-75a5af0c
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.29s)

TestStartStop/group/embed-certs/serial/Pause (3.19s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-642101 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-642101 -n embed-certs-642101
E0831 23:23:50.022088 1166785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/old-k8s-version-777320/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-642101 -n embed-certs-642101: exit status 2 (360.240825ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-642101 -n embed-certs-642101
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-642101 -n embed-certs-642101: exit status 2 (336.633905ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-642101 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-642101 -n embed-certs-642101
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-642101 -n embed-certs-642101
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.19s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.16s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:345: "kubernetes-dashboard-695b96c756-w7pq8" [ad056e3e-5298-4b59-8935-67aa7928cb18] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004745396s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-223442 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.16s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.31s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-223442 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240730-75a5af0c
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.31s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (4.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-223442 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p default-k8s-diff-port-223442 --alsologtostderr -v=1: (1.149126845s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-223442 -n default-k8s-diff-port-223442
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-223442 -n default-k8s-diff-port-223442: exit status 2 (473.235514ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-223442 -n default-k8s-diff-port-223442
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-223442 -n default-k8s-diff-port-223442: exit status 2 (447.042182ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-223442 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-223442 -n default-k8s-diff-port-223442
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-223442 -n default-k8s-diff-port-223442
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (4.12s)

TestStartStop/group/newest-cni/serial/FirstStart (41.46s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-829097 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-829097 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0: (41.461888552s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (41.46s)

TestNetworkPlugins/group/auto/Start (56.25s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-263741 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
E0831 23:24:06.829493 1166785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/addons-516593/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:24:23.742417 1166785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/addons-516593/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-263741 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (56.245197691s)
--- PASS: TestNetworkPlugins/group/auto/Start (56.25s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.11s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-829097 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-829097 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (2.107132068s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.11s)

TestStartStop/group/newest-cni/serial/Stop (1.3s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-829097 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-829097 --alsologtostderr -v=3: (1.302620034s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.30s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-829097 -n newest-cni-829097
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-829097 -n newest-cni-829097: exit status 7 (74.491076ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-829097 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/newest-cni/serial/SecondStart (16.32s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-829097 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-829097 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0: (15.883940889s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-829097 -n newest-cni-829097
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (16.32s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.3s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-829097 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240730-75a5af0c
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.30s)

TestStartStop/group/newest-cni/serial/Pause (3.72s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-829097 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p newest-cni-829097 --alsologtostderr -v=1: (1.074671928s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-829097 -n newest-cni-829097
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-829097 -n newest-cni-829097: exit status 2 (340.927708ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-829097 -n newest-cni-829097
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-829097 -n newest-cni-829097: exit status 2 (329.072523ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-829097 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-829097 -n newest-cni-829097
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-829097 -n newest-cni-829097
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.72s)
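[Editor's note] The Pause step above drives pause, then verifies APIServer reports "Paused" and Kubelet reports "Stopped" (both with exit status 2, tolerated the same way as above), then unpauses and re-checks. A sketch of the same sequence (hypothetical helper, not the test's own code; binary path and profile from the log):

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

// run executes a minikube subcommand, tolerating the non-zero exit
// codes that `status` uses to encode state, and returns trimmed stdout.
func run(args ...string) string {
	out, err := exec.Command("out/minikube-linux-arm64", args...).Output()
	var exitErr *exec.ExitError
	if err != nil && !errors.As(err, &exitErr) {
		panic(err) // real failure, not a status exit code
	}
	return strings.TrimSpace(string(out))
}

func main() {
	p := "newest-cni-829097"
	run("pause", "-p", p, "--alsologtostderr", "-v=1")
	fmt.Println(run("status", "--format={{.APIServer}}", "-p", p)) // expect "Paused"
	fmt.Println(run("status", "--format={{.Kubelet}}", "-p", p))   // expect "Stopped"
	run("unpause", "-p", p, "--alsologtostderr", "-v=1")
	fmt.Println(run("status", "--format={{.APIServer}}", "-p", p)) // expect "Running"
}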

TestNetworkPlugins/group/auto/KubeletFlags (0.61s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-263741 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.61s)

TestNetworkPlugins/group/auto/NetCatPod (12.5s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-263741 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:345: "netcat-6fc964789b-stm7g" [3e23a139-5182-4307-82f5-3050f9220cc7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:345: "netcat-6fc964789b-stm7g" [3e23a139-5182-4307-82f5-3050f9220cc7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.002701918s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.50s)
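[Editor's note] Each NetCatPod step replaces the netcat deployment and then waits for pods labelled app=netcat to reach Running, as the helpers_test.go lines above show. A rough client-go equivalent of that wait (a sketch under stated assumptions, not minikube's actual helper; namespace, label and 15m timeout taken from the log):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the default kubeconfig (~/.kube/config), assumed to point at the test cluster.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	deadline := time.Now().Add(15 * time.Minute)
	for time.Now().Before(deadline) {
		pods, err := client.CoreV1().Pods("default").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "app=netcat"})
		if err == nil && len(pods.Items) > 0 && allRunning(pods.Items) {
			fmt.Println("app=netcat healthy")
			return
		}
		time.Sleep(2 * time.Second)
	}
	panic("app=netcat did not become healthy within 15m")
}

func allRunning(pods []corev1.Pod) bool {
	for _, p := range pods {
		if p.Status.Phase != corev1.PodRunning {
			return false
		}
	}
	return true
}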

TestNetworkPlugins/group/kindnet/Start (61.54s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-263741 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
E0831 23:25:05.063091 1166785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/no-preload-039701/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-263741 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (1m1.538185755s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (61.54s)

TestNetworkPlugins/group/auto/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-263741 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.21s)

TestNetworkPlugins/group/auto/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-263741 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.18s)

TestNetworkPlugins/group/auto/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-263741 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.19s)
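[Editor's note] DNS, Localhost and HairPin above are three probes run inside the netcat pod: an in-cluster nslookup, a loopback port scan, and a hairpin check in which the pod reaches itself back through its own service name (nc's -z flag scans without sending data; -w 5 bounds the wait). A sketch of the same probes driven via kubectl (context name from the log; error handling elided):

package main

import (
	"fmt"
	"os/exec"
)

// probe runs a command inside the netcat deployment via kubectl exec.
func probe(args ...string) {
	base := []string{"--context", "auto-263741", "exec", "deployment/netcat", "--"}
	out, err := exec.Command("kubectl", append(base, args...)...).CombinedOutput()
	fmt.Printf("%s err=%v\n", out, err)
}

func main() {
	probe("nslookup", "kubernetes.default")                  // DNS
	probe("/bin/sh", "-c", "nc -w 5 -i 5 -z localhost 8080") // Localhost
	probe("/bin/sh", "-c", "nc -w 5 -i 5 -z netcat 8080")    // HairPin
}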

TestNetworkPlugins/group/calico/Start (60.63s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-263741 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-263741 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (1m0.634749411s)
--- PASS: TestNetworkPlugins/group/calico/Start (60.63s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:345: "kindnet-cj9jn" [cf89a58e-b396-406b-9a74-3cc05c26bdf0] Running
E0831 23:26:06.161298 1166785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/old-k8s-version-777320/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004347315s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.34s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-263741 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.34s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.35s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-263741 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:345: "netcat-6fc964789b-xlzd5" [f3523af9-2c65-4876-ab17-b8ee64386345] Pending
helpers_test.go:345: "netcat-6fc964789b-xlzd5" [f3523af9-2c65-4876-ab17-b8ee64386345] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:345: "netcat-6fc964789b-xlzd5" [f3523af9-2c65-4876-ab17-b8ee64386345] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.003671259s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.35s)

TestNetworkPlugins/group/kindnet/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-263741 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.20s)

TestNetworkPlugins/group/kindnet/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-263741 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.16s)

TestNetworkPlugins/group/kindnet/HairPin (0.24s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-263741 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.24s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:345: "calico-node-9565r" [f6d7a36e-aeeb-485f-b1ad-6bebe0125e37] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005233818s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/custom-flannel/Start (55.04s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-263741 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-263741 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (55.040007777s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (55.04s)

TestNetworkPlugins/group/calico/KubeletFlags (0.4s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-263741 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.40s)

TestNetworkPlugins/group/calico/NetCatPod (15.44s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-263741 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:345: "netcat-6fc964789b-s5mwf" [b069d5f4-0fe2-4941-8e0f-ed183acc5b07] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:345: "netcat-6fc964789b-s5mwf" [b069d5f4-0fe2-4941-8e0f-ed183acc5b07] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 15.004524511s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (15.44s)

TestNetworkPlugins/group/calico/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-263741 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.21s)

TestNetworkPlugins/group/calico/Localhost (0.22s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-263741 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.22s)

TestNetworkPlugins/group/calico/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-263741 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.18s)

TestNetworkPlugins/group/enable-default-cni/Start (71s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-263741 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-263741 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (1m10.995355593s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (71.00s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.42s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-263741 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.42s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (10.36s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-263741 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:345: "netcat-6fc964789b-8b9qt" [d25cc8a8-c306-4ef7-a18f-ac16d1e710bd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:345: "netcat-6fc964789b-8b9qt" [d25cc8a8-c306-4ef7-a18f-ac16d1e710bd] Running
E0831 23:27:48.905234 1166785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/no-preload-039701/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.004969452s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.36s)

TestNetworkPlugins/group/custom-flannel/DNS (0.25s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-263741 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.25s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.26s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-263741 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.26s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.28s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-263741 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.28s)

TestNetworkPlugins/group/flannel/Start (53s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-263741 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
E0831 23:28:20.635897 1166785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/functional-059694/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:28:37.566913 1166785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/functional-059694/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-263741 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (53.004861841s)
--- PASS: TestNetworkPlugins/group/flannel/Start (53.00s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.41s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-263741 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.41s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.4s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-263741 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:345: "netcat-6fc964789b-ds2zf" [49372af8-aa9b-4df3-a31b-1304afc2c70a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:345: "netcat-6fc964789b-ds2zf" [49372af8-aa9b-4df3-a31b-1304afc2c70a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.003814678s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.40s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-263741 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.24s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-263741 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.18s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-263741 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.20s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:345: "kube-flannel-ds-tfvfv" [aceb807f-ff68-4421-9ad1-6ad0174ac95a] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.00412466s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/bridge/Start (49.12s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-263741 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-263741 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (49.120430557s)
--- PASS: TestNetworkPlugins/group/bridge/Start (49.12s)
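[Editor's note] The network-plugin Start runs in this section are identical apart from how the CNI is selected: --cni=kindnet/calico/flannel/bridge, a custom manifest path, or --enable-default-cni. A table-driven sketch of those invocations (print-only; profile names here are hypothetical, and calling Run() would actually create clusters):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	for i, cni := range []string{
		"kindnet", "calico", "testdata/kube-flannel.yaml", "flannel", "bridge",
	} {
		profile := fmt.Sprintf("example-%d", i) // hypothetical profile name
		cmd := exec.Command("out/minikube-linux-arm64", "start",
			"-p", profile, "--memory=3072", "--alsologtostderr",
			"--wait=true", "--wait-timeout=15m", "--cni="+cni,
			"--driver=docker", "--container-runtime=containerd")
		fmt.Println(cmd.String()) // cmd.Run() would start the cluster
	}
}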

TestNetworkPlugins/group/flannel/KubeletFlags (0.35s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-263741 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.35s)

TestNetworkPlugins/group/flannel/NetCatPod (11.34s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-263741 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:345: "netcat-6fc964789b-pd6vw" [3d85b2f4-6cbb-4fcd-ae17-c6e7d3490290] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0831 23:29:16.044092 1166785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/default-k8s-diff-port-223442/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:345: "netcat-6fc964789b-pd6vw" [3d85b2f4-6cbb-4fcd-ae17-c6e7d3490290] Running
E0831 23:29:23.741975 1166785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/addons-516593/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.005019212s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.34s)

TestNetworkPlugins/group/flannel/DNS (0.27s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-263741 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.27s)

TestNetworkPlugins/group/flannel/Localhost (0.25s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-263741 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.25s)

TestNetworkPlugins/group/flannel/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-263741 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.18s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-263741 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)

TestNetworkPlugins/group/bridge/NetCatPod (10.31s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-263741 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:345: "netcat-6fc964789b-dnztc" [3af24f59-7430-42f0-acb0-f7d06bf725f0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0831 23:30:01.285497 1166785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/auto-263741/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:30:01.292068 1166785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/auto-263741/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:30:01.303618 1166785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/auto-263741/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:30:01.325200 1166785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/auto-263741/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:30:01.366573 1166785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/auto-263741/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:30:01.448189 1166785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/auto-263741/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:30:01.609813 1166785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/auto-263741/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:30:01.931522 1166785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/auto-263741/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:30:02.573833 1166785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/auto-263741/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:30:03.856388 1166785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/auto-263741/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:345: "netcat-6fc964789b-dnztc" [3af24f59-7430-42f0-acb0-f7d06bf725f0] Running
E0831 23:30:06.418341 1166785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/auto-263741/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.003390084s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.31s)

TestNetworkPlugins/group/bridge/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-263741 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.16s)

TestNetworkPlugins/group/bridge/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-263741 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.14s)

TestNetworkPlugins/group/bridge/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-263741 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E0831 23:30:11.539851 1166785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-1161402/.minikube/profiles/auto-263741/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.14s)

Test skip (28/338)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)
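[Editor's note] The "Test for darwin and windows" skips here gate on the host platform, which in Go tests is the standard runtime.GOOS check. A minimal sketch of that pattern (hypothetical test, not the actual aaa_download_only_test.go code):

package example

import (
	"runtime"
	"testing"
)

func TestKubectlDownload(t *testing.T) {
	if runtime.GOOS != "darwin" && runtime.GOOS != "windows" {
		t.Skip("Test for darwin and windows") // mirrors the skip reason above
	}
	// ... platform-specific assertions would go here ...
}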

TestDownloadOnly/v1.31.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0/cached-images (0.00s)

TestDownloadOnly/v1.31.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0/binaries (0.00s)

TestDownloadOnly/v1.31.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.0/kubectl (0.00s)

TestDownloadOnlyKic (0.6s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-630217 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:176: Cleaning up "download-docker-630217" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-630217
--- SKIP: TestDownloadOnlyKic (0.60s)

TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:446: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.18s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-548541" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-548541
--- SKIP: TestStartStop/group/disable-driver-mounts (0.18s)

TestNetworkPlugins/group/kubenet (4.47s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-263741 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-263741

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-263741

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-263741

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-263741

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-263741

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-263741

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-263741

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-263741

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-263741

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-263741

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-263741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-263741"

>>> host: /etc/hosts:
* Profile "kubenet-263741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-263741"

>>> host: /etc/resolv.conf:
* Profile "kubenet-263741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-263741"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-263741

>>> host: crictl pods:
* Profile "kubenet-263741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-263741"

>>> host: crictl containers:
* Profile "kubenet-263741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-263741"

>>> k8s: describe netcat deployment:
error: context "kubenet-263741" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-263741" does not exist

>>> k8s: netcat logs:
error: context "kubenet-263741" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-263741" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-263741" does not exist

>>> k8s: coredns logs:
error: context "kubenet-263741" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-263741" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-263741" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-263741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-263741"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-263741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-263741"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-263741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-263741"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-263741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-263741"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-263741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-263741"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-263741" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-263741" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-263741" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-263741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-263741"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-263741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-263741"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-263741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-263741"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-263741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-263741"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-263741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-263741"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-263741

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-263741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-263741"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-263741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-263741"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-263741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-263741"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-263741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-263741"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-263741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-263741"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-263741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-263741"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-263741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-263741"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-263741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-263741"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-263741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-263741"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-263741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-263741"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-263741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-263741"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-263741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-263741"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-263741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-263741"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-263741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-263741"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-263741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-263741"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-263741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-263741"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-263741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-263741"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-263741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-263741"

                                                
                                                
----------------------- debugLogs end: kubenet-263741 [took: 4.26060133s] --------------------------------
helpers_test.go:176: Cleaning up "kubenet-263741" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-263741
--- SKIP: TestNetworkPlugins/group/kubenet (4.47s)
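Note: every kubenet-263741 probe above fails in one of two uniform ways: minikube reports the profile as missing, and kubectl reports the context as missing, because the test was skipped before "minikube start" ever ran. A minimal sketch (not part of the test run; assumes the same out/minikube-linux-arm64 binary and kubectl on PATH) of confirming both conditions:

# The profile list should not contain "kubenet-263741".
out/minikube-linux-arm64 profile list
# Exits non-zero with "context kubenet-263741 not found".
kubectl config get-contexts kubenet-263741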
x
+
TestNetworkPlugins/group/cilium (5.88s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
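The guard at net_test.go:102 itself is not reproduced in this report. As a hedged sketch (the exact invocation is an assumption; the suite normally expects a prebuilt out/minikube-linux-arm64 binary and may take extra flags), the skip can be observed locally by running only this group:

# Run only the cilium group of TestNetworkPlugins; it should report SKIP
# with the message above rather than starting a cluster.
go test ./test/integration -run 'TestNetworkPlugins/group/cilium' -v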
panic.go:626: 
----------------------- debugLogs start: cilium-263741 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-263741

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-263741

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-263741

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-263741

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-263741

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-263741

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-263741

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-263741

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-263741

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-263741

>>> host: /etc/nsswitch.conf:
* Profile "cilium-263741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-263741"

>>> host: /etc/hosts:
* Profile "cilium-263741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-263741"

>>> host: /etc/resolv.conf:
* Profile "cilium-263741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-263741"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-263741

>>> host: crictl pods:
* Profile "cilium-263741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-263741"

>>> host: crictl containers:
* Profile "cilium-263741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-263741"

>>> k8s: describe netcat deployment:
error: context "cilium-263741" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-263741" does not exist

>>> k8s: netcat logs:
error: context "cilium-263741" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-263741" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-263741" does not exist

>>> k8s: coredns logs:
error: context "cilium-263741" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-263741" does not exist

>>> k8s: api server logs:
error: context "cilium-263741" does not exist

>>> host: /etc/cni:
* Profile "cilium-263741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-263741"

>>> host: ip a s:
* Profile "cilium-263741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-263741"

>>> host: ip r s:
* Profile "cilium-263741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-263741"

>>> host: iptables-save:
* Profile "cilium-263741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-263741"

>>> host: iptables table nat:
* Profile "cilium-263741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-263741"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-263741

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-263741

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-263741" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-263741" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-263741

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-263741

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-263741" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-263741" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-263741" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-263741" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-263741" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-263741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-263741"

>>> host: kubelet daemon config:
* Profile "cilium-263741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-263741"

>>> k8s: kubelet logs:
* Profile "cilium-263741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-263741"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-263741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-263741"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-263741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-263741"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-263741

>>> host: docker daemon status:
* Profile "cilium-263741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-263741"

>>> host: docker daemon config:
* Profile "cilium-263741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-263741"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-263741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-263741"

>>> host: docker system info:
* Profile "cilium-263741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-263741"

>>> host: cri-docker daemon status:
* Profile "cilium-263741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-263741"

>>> host: cri-docker daemon config:
* Profile "cilium-263741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-263741"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-263741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-263741"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-263741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-263741"

>>> host: cri-dockerd version:
* Profile "cilium-263741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-263741"

>>> host: containerd daemon status:
* Profile "cilium-263741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-263741"

>>> host: containerd daemon config:
* Profile "cilium-263741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-263741"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-263741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-263741"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-263741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-263741"

>>> host: containerd config dump:
* Profile "cilium-263741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-263741"

>>> host: crio daemon status:
* Profile "cilium-263741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-263741"

>>> host: crio daemon config:
* Profile "cilium-263741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-263741"

>>> host: /etc/crio:
* Profile "cilium-263741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-263741"

>>> host: crio config:
* Profile "cilium-263741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-263741"
----------------------- debugLogs end: cilium-263741 [took: 5.577924308s] --------------------------------
helpers_test.go:176: Cleaning up "cilium-263741" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-263741
--- SKIP: TestNetworkPlugins/group/cilium (5.88s)
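Both ">>> k8s: kubectl config" dumps above are the stock empty kubeconfig, consistent with no cluster ever having been created for these profiles. A minimal sketch (assumes only that kubectl is installed) that reproduces the same output:

# Point kubectl at a path where no kubeconfig exists; the merged view it
# prints matches the dumps above (clusters: null, contexts: null, ...).
KUBECONFIG=/tmp/nonexistent-kubeconfig kubectl config view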