Test Report: Docker_Linux_containerd_arm64 19389

4e9c16444aca391b349fd87cc48c80a0a38d518e:2024-08-07:35690
Tests failed (2/336)

Order  Failed test                                              Duration (s)
38     TestAddons/serial/Volcano                                199.97
311    TestStartStop/group/old-k8s-version/serial/SecondStart   380.43
TestAddons/serial/Volcano (199.97s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:913: volcano-controller stabilized in 49.622094ms
addons_test.go:905: volcano-admission stabilized in 49.669748ms
addons_test.go:897: volcano-scheduler stabilized in 49.716213ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-844f6db89b-wwz7p" [1677c20b-14be-431c-b7f2-29079a595ffa] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.004539261s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-5f7844f7bc-n945q" [f031dc59-8ae5-44cf-a793-7df186683f27] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.009613355s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-59cb4746db-2j5hv" [008518d2-b267-4a79-a9f7-0654353f7e41] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.008527544s
addons_test.go:932: (dbg) Run:  kubectl --context addons-553671 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-553671 create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-553671 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [1b2bffb4-a4c5-4ad8-829a-a45a81445790] Pending
helpers_test.go:344: "test-job-nginx-0" [1b2bffb4-a4c5-4ad8-829a-a45a81445790] Pending: PodScheduled:Unschedulable (0/1 nodes are unavailable: 1 Insufficient cpu.)
helpers_test.go:329: TestAddons/serial/Volcano: WARNING: pod list for "my-volcano" "volcano.sh/job-name=test-job" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:964: ***** TestAddons/serial/Volcano: pod "volcano.sh/job-name=test-job" failed to start within 3m0s: context deadline exceeded ****
addons_test.go:964: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-553671 -n addons-553671
addons_test.go:964: TestAddons/serial/Volcano: showing logs for failed pods as of 2024-08-07 18:37:22.251462379 +0000 UTC m=+447.251840153
addons_test.go:964: (dbg) Run:  kubectl --context addons-553671 describe po test-job-nginx-0 -n my-volcano
addons_test.go:964: (dbg) kubectl --context addons-553671 describe po test-job-nginx-0 -n my-volcano:
Name:             test-job-nginx-0
Namespace:        my-volcano
Priority:         0
Service Account:  default
Node:             <none>
Labels:           volcano.sh/job-name=test-job
                  volcano.sh/job-namespace=my-volcano
                  volcano.sh/queue-name=test
                  volcano.sh/task-index=0
                  volcano.sh/task-spec=nginx
Annotations:      scheduling.k8s.io/group-name: test-job-37ec4f48-0441-4c9e-a815-057eb5d4bf7e
                  volcano.sh/job-name: test-job
                  volcano.sh/job-version: 0
                  volcano.sh/queue-name: test
                  volcano.sh/task-index: 0
                  volcano.sh/task-spec: nginx
                  volcano.sh/template-uid: test-job-nginx
Status:           Pending
IP:
IPs:              <none>
Controlled By:    Job/test-job
Containers:
  nginx:
    Image:      nginx:latest
    Port:       <none>
    Host Port:  <none>
    Command:
      sleep
      10m
    Limits:
      cpu:  1
    Requests:
      cpu:  1
    Environment:
      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
      PROJECT_ID:                      this_is_fake
      GCP_PROJECT:                     this_is_fake
      GCLOUD_PROJECT:                  this_is_fake
      GOOGLE_CLOUD_PROJECT:            this_is_fake
      CLOUDSDK_CORE_PROJECT:           this_is_fake
    Mounts:
      /google-app-creds.json from gcp-creds (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kn7dm (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  kube-api-access-kn7dm:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
  gcp-creds:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/minikube/google_application_credentials.json
    HostPathType:  File
QoS Class:         Burstable
Node-Selectors:    <none>
Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age    From     Message
  ----     ------            ----   ----     -------
  Warning  FailedScheduling  2m59s  volcano  0/1 nodes are unavailable: 1 Insufficient cpu.
addons_test.go:964: (dbg) Run:  kubectl --context addons-553671 logs test-job-nginx-0 -n my-volcano
addons_test.go:964: (dbg) kubectl --context addons-553671 logs test-job-nginx-0 -n my-volcano:
addons_test.go:965: failed waiting for test-local-path pod: volcano.sh/job-name=test-job within 3m0s: context deadline exceeded
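
The FailedScheduling event pins this on CPU pressure rather than on Volcano itself: the pod requests cpu: 1, and the cluster's only node is a minikube container capped at 2 CPUs (NanoCpus 2000000000 in the docker inspect below), with some of that presumably already requested by the addon pods enabled at start. A quick way to confirm against this profile (diagnostic sketch, not part of the test):

    # Allocatable CPU on the single node (node name matches the profile name)
    kubectl --context addons-553671 get node addons-553671 -o jsonpath='{.status.allocatable.cpu}{"\n"}'

    # "Allocated resources" lists the CPU requests already consumed; if the
    # remainder is below 1 CPU, the "1 Insufficient cpu" verdict follows.
    kubectl --context addons-553671 describe node addons-553671 | grep -A 6 'Allocated resources'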
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/serial/Volcano]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-553671
helpers_test.go:235: (dbg) docker inspect addons-553671:

-- stdout --
	[
	    {
	        "Id": "957c5888e206f54f736402709a8c10fe7bb991a635b2d9ca325bacfdbc2d1e7d",
	        "Created": "2024-08-07T18:30:46.4472641Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 449996,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-08-07T18:30:46.584220338Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:3c2a9878c3c4bba39f30158565171acf4131a22446ec76f61f10b90a1f2f9e07",
	        "ResolvConfPath": "/var/lib/docker/containers/957c5888e206f54f736402709a8c10fe7bb991a635b2d9ca325bacfdbc2d1e7d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/957c5888e206f54f736402709a8c10fe7bb991a635b2d9ca325bacfdbc2d1e7d/hostname",
	        "HostsPath": "/var/lib/docker/containers/957c5888e206f54f736402709a8c10fe7bb991a635b2d9ca325bacfdbc2d1e7d/hosts",
	        "LogPath": "/var/lib/docker/containers/957c5888e206f54f736402709a8c10fe7bb991a635b2d9ca325bacfdbc2d1e7d/957c5888e206f54f736402709a8c10fe7bb991a635b2d9ca325bacfdbc2d1e7d-json.log",
	        "Name": "/addons-553671",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-553671:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-553671",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/b6e3db5d31ef0069164ebfc2e912cbdf6f0eff4a73ef9760e649312bca3b621c-init/diff:/var/lib/docker/overlay2/fb306904e51181155093d9f5e1422a0780db1826017288d8ca0dfbf62d428a72/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b6e3db5d31ef0069164ebfc2e912cbdf6f0eff4a73ef9760e649312bca3b621c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b6e3db5d31ef0069164ebfc2e912cbdf6f0eff4a73ef9760e649312bca3b621c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b6e3db5d31ef0069164ebfc2e912cbdf6f0eff4a73ef9760e649312bca3b621c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-553671",
	                "Source": "/var/lib/docker/volumes/addons-553671/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-553671",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-553671",
	                "name.minikube.sigs.k8s.io": "addons-553671",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "58a08b75a667bbf203a251e93b4d5360138c1bccc85d48b66282fc31727a5a66",
	            "SandboxKey": "/var/run/docker/netns/58a08b75a667",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33163"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33164"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33167"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33165"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33166"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-553671": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "28336e75ad3e8b17fb944cb36d9b6d356744b1e371cf6e664d6eec0aeebebd87",
	                    "EndpointID": "e32aba92b3a4bd1fca3366cd85adc65892743310c258092bc6005a2c40c77fc7",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-553671",
	                        "957c5888e206"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
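
The inspect output confirms the resource ceiling the scheduler was working against: NanoCpus 2000000000 (2 CPUs) and Memory 4194304000 (4000 MiB), matching the --memory=4000 start flag and minikube's default of 2 CPUs. The two fields can be pulled directly with docker's template syntax (sketch):

    docker inspect -f '{{.HostConfig.NanoCpus}} {{.HostConfig.Memory}}' addons-553671
    # => 2000000000 4194304000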
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-553671 -n addons-553671
helpers_test.go:244: <<< TestAddons/serial/Volcano FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/serial/Volcano]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-553671 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-553671 logs -n 25: (1.610112698s)
helpers_test.go:252: TestAddons/serial/Volcano logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-777799   | jenkins | v1.33.1 | 07 Aug 24 18:29 UTC |                     |
	|         | -p download-only-777799              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.33.1 | 07 Aug 24 18:30 UTC | 07 Aug 24 18:30 UTC |
	| delete  | -p download-only-777799              | download-only-777799   | jenkins | v1.33.1 | 07 Aug 24 18:30 UTC | 07 Aug 24 18:30 UTC |
	| start   | -o=json --download-only              | download-only-547887   | jenkins | v1.33.1 | 07 Aug 24 18:30 UTC |                     |
	|         | -p download-only-547887              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3         |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.33.1 | 07 Aug 24 18:30 UTC | 07 Aug 24 18:30 UTC |
	| delete  | -p download-only-547887              | download-only-547887   | jenkins | v1.33.1 | 07 Aug 24 18:30 UTC | 07 Aug 24 18:30 UTC |
	| start   | -o=json --download-only              | download-only-156624   | jenkins | v1.33.1 | 07 Aug 24 18:30 UTC |                     |
	|         | -p download-only-156624              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0    |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.33.1 | 07 Aug 24 18:30 UTC | 07 Aug 24 18:30 UTC |
	| delete  | -p download-only-156624              | download-only-156624   | jenkins | v1.33.1 | 07 Aug 24 18:30 UTC | 07 Aug 24 18:30 UTC |
	| delete  | -p download-only-777799              | download-only-777799   | jenkins | v1.33.1 | 07 Aug 24 18:30 UTC | 07 Aug 24 18:30 UTC |
	| delete  | -p download-only-547887              | download-only-547887   | jenkins | v1.33.1 | 07 Aug 24 18:30 UTC | 07 Aug 24 18:30 UTC |
	| delete  | -p download-only-156624              | download-only-156624   | jenkins | v1.33.1 | 07 Aug 24 18:30 UTC | 07 Aug 24 18:30 UTC |
	| start   | --download-only -p                   | download-docker-768978 | jenkins | v1.33.1 | 07 Aug 24 18:30 UTC |                     |
	|         | download-docker-768978               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p download-docker-768978            | download-docker-768978 | jenkins | v1.33.1 | 07 Aug 24 18:30 UTC | 07 Aug 24 18:30 UTC |
	| start   | --download-only -p                   | binary-mirror-181151   | jenkins | v1.33.1 | 07 Aug 24 18:30 UTC |                     |
	|         | binary-mirror-181151                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:40631               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-181151              | binary-mirror-181151   | jenkins | v1.33.1 | 07 Aug 24 18:30 UTC | 07 Aug 24 18:30 UTC |
	| addons  | disable dashboard -p                 | addons-553671          | jenkins | v1.33.1 | 07 Aug 24 18:30 UTC |                     |
	|         | addons-553671                        |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                  | addons-553671          | jenkins | v1.33.1 | 07 Aug 24 18:30 UTC |                     |
	|         | addons-553671                        |                        |         |         |                     |                     |
	| start   | -p addons-553671 --wait=true         | addons-553671          | jenkins | v1.33.1 | 07 Aug 24 18:30 UTC | 07 Aug 24 18:34 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
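	# Note (reconstruction, not logged output): the final Audit row above is the
	# start that created the addons-553671 profile under test. Flattened from the
	# table into a single invocation, flags copied verbatim from the rows:
	#   out/minikube-linux-arm64 start -p addons-553671 --wait=true --memory=4000 \
	#     --alsologtostderr --addons=registry --addons=metrics-server \
	#     --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth \
	#     --addons=cloud-spanner --addons=inspektor-gadget \
	#     --addons=storage-provisioner-rancher --addons=nvidia-device-plugin \
	#     --addons=yakd --addons=volcano --driver=docker \
	#     --container-runtime=containerd --addons=ingress --addons=ingress-dns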
	
	
	==> Last Start <==
	Log file created at: 2024/08/07 18:30:21
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0807 18:30:21.976327  449501 out.go:291] Setting OutFile to fd 1 ...
	I0807 18:30:21.976495  449501 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 18:30:21.976504  449501 out.go:304] Setting ErrFile to fd 2...
	I0807 18:30:21.976509  449501 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 18:30:21.976774  449501 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19389-443116/.minikube/bin
	I0807 18:30:21.977231  449501 out.go:298] Setting JSON to false
	I0807 18:30:21.978122  449501 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":7973,"bootTime":1723047449,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0807 18:30:21.978197  449501 start.go:139] virtualization:  
	I0807 18:30:21.980782  449501 out.go:177] * [addons-553671] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0807 18:30:21.983396  449501 out.go:177]   - MINIKUBE_LOCATION=19389
	I0807 18:30:21.983495  449501 notify.go:220] Checking for updates...
	I0807 18:30:21.986946  449501 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0807 18:30:21.988697  449501 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19389-443116/kubeconfig
	I0807 18:30:21.990642  449501 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19389-443116/.minikube
	I0807 18:30:21.992387  449501 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0807 18:30:21.994172  449501 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0807 18:30:21.996613  449501 driver.go:392] Setting default libvirt URI to qemu:///system
	I0807 18:30:22.020602  449501 docker.go:123] docker version: linux-27.1.1:Docker Engine - Community
	I0807 18:30:22.020729  449501 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0807 18:30:22.086352  449501 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-08-07 18:30:22.076640713 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0807 18:30:22.086476  449501 docker.go:307] overlay module found
	I0807 18:30:22.088797  449501 out.go:177] * Using the docker driver based on user configuration
	I0807 18:30:22.090910  449501 start.go:297] selected driver: docker
	I0807 18:30:22.090941  449501 start.go:901] validating driver "docker" against <nil>
	I0807 18:30:22.090955  449501 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0807 18:30:22.091566  449501 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0807 18:30:22.142281  449501 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-08-07 18:30:22.133073659 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0807 18:30:22.142451  449501 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0807 18:30:22.142718  449501 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0807 18:30:22.144513  449501 out.go:177] * Using Docker driver with root privileges
	I0807 18:30:22.146155  449501 cni.go:84] Creating CNI manager for ""
	I0807 18:30:22.146177  449501 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0807 18:30:22.146188  449501 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0807 18:30:22.146277  449501 start.go:340] cluster config:
	{Name:addons-553671 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-553671 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 18:30:22.148253  449501 out.go:177] * Starting "addons-553671" primary control-plane node in "addons-553671" cluster
	I0807 18:30:22.149750  449501 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0807 18:30:22.151456  449501 out.go:177] * Pulling base image v0.0.44-1723026928-19389 ...
	I0807 18:30:22.152903  449501 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime containerd
	I0807 18:30:22.152950  449501 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19389-443116/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-containerd-overlay2-arm64.tar.lz4
	I0807 18:30:22.152976  449501 cache.go:56] Caching tarball of preloaded images
	I0807 18:30:22.152989  449501 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 in local docker daemon
	I0807 18:30:22.153057  449501 preload.go:172] Found /home/jenkins/minikube-integration/19389-443116/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0807 18:30:22.153068  449501 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on containerd
	I0807 18:30:22.153409  449501 profile.go:143] Saving config to /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/addons-553671/config.json ...
	I0807 18:30:22.153437  449501 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/addons-553671/config.json: {Name:mk45cd5a257eec4ef5ee1d1c494d263cc8f7e2ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 18:30:22.168258  449501 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 to local cache
	I0807 18:30:22.168462  449501 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 in local cache directory
	I0807 18:30:22.168485  449501 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 in local cache directory, skipping pull
	I0807 18:30:22.168491  449501 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 exists in cache, skipping pull
	I0807 18:30:22.168499  449501 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 as a tarball
	I0807 18:30:22.168505  449501 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 from local cache
	I0807 18:30:39.137825  449501 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 from cached tarball
	I0807 18:30:39.137865  449501 cache.go:194] Successfully downloaded all kic artifacts
	I0807 18:30:39.137911  449501 start.go:360] acquireMachinesLock for addons-553671: {Name:mkbc05694b35c583b77da2f4026bd6292a7091e3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0807 18:30:39.138034  449501 start.go:364] duration metric: took 98.386µs to acquireMachinesLock for "addons-553671"
	I0807 18:30:39.138065  449501 start.go:93] Provisioning new machine with config: &{Name:addons-553671 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-553671 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0807 18:30:39.138162  449501 start.go:125] createHost starting for "" (driver="docker")
	I0807 18:30:39.140883  449501 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0807 18:30:39.141160  449501 start.go:159] libmachine.API.Create for "addons-553671" (driver="docker")
	I0807 18:30:39.141208  449501 client.go:168] LocalClient.Create starting
	I0807 18:30:39.141349  449501 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19389-443116/.minikube/certs/ca.pem
	I0807 18:30:39.457111  449501 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19389-443116/.minikube/certs/cert.pem
	I0807 18:30:39.969621  449501 cli_runner.go:164] Run: docker network inspect addons-553671 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0807 18:30:39.985420  449501 cli_runner.go:211] docker network inspect addons-553671 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0807 18:30:39.985508  449501 network_create.go:284] running [docker network inspect addons-553671] to gather additional debugging logs...
	I0807 18:30:39.985528  449501 cli_runner.go:164] Run: docker network inspect addons-553671
	W0807 18:30:40.000518  449501 cli_runner.go:211] docker network inspect addons-553671 returned with exit code 1
	I0807 18:30:40.000551  449501 network_create.go:287] error running [docker network inspect addons-553671]: docker network inspect addons-553671: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-553671 not found
	I0807 18:30:40.000566  449501 network_create.go:289] output of [docker network inspect addons-553671]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-553671 not found
	
	** /stderr **
	I0807 18:30:40.000685  449501 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0807 18:30:40.032062  449501 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40017a5240}
	I0807 18:30:40.032151  449501 network_create.go:124] attempt to create docker network addons-553671 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0807 18:30:40.032229  449501 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-553671 addons-553671
	I0807 18:30:40.130275  449501 network_create.go:108] docker network addons-553671 192.168.49.0/24 created
	I0807 18:30:40.130315  449501 kic.go:121] calculated static IP "192.168.49.2" for the "addons-553671" container
	I0807 18:30:40.130402  449501 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0807 18:30:40.147609  449501 cli_runner.go:164] Run: docker volume create addons-553671 --label name.minikube.sigs.k8s.io=addons-553671 --label created_by.minikube.sigs.k8s.io=true
	I0807 18:30:40.165489  449501 oci.go:103] Successfully created a docker volume addons-553671
	I0807 18:30:40.165612  449501 cli_runner.go:164] Run: docker run --rm --name addons-553671-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-553671 --entrypoint /usr/bin/test -v addons-553671:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 -d /var/lib
	I0807 18:30:42.101097  449501 cli_runner.go:217] Completed: docker run --rm --name addons-553671-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-553671 --entrypoint /usr/bin/test -v addons-553671:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 -d /var/lib: (1.935438014s)
	I0807 18:30:42.101133  449501 oci.go:107] Successfully prepared a docker volume addons-553671
	I0807 18:30:42.101177  449501 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime containerd
	I0807 18:30:42.101201  449501 kic.go:194] Starting extracting preloaded images to volume ...
	I0807 18:30:42.101283  449501 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19389-443116/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-553671:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0807 18:30:46.379823  449501 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19389-443116/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-553671:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 -I lz4 -xf /preloaded.tar -C /extractDir: (4.278502262s)
	I0807 18:30:46.379859  449501 kic.go:203] duration metric: took 4.27865457s to extract preloaded images to volume ...
	W0807 18:30:46.380018  449501 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0807 18:30:46.380157  449501 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0807 18:30:46.433059  449501 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-553671 --name addons-553671 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-553671 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-553671 --network addons-553671 --ip 192.168.49.2 --volume addons-553671:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0
	I0807 18:30:46.733147  449501 cli_runner.go:164] Run: docker container inspect addons-553671 --format={{.State.Running}}
	I0807 18:30:46.758205  449501 cli_runner.go:164] Run: docker container inspect addons-553671 --format={{.State.Status}}
	I0807 18:30:46.789520  449501 cli_runner.go:164] Run: docker exec addons-553671 stat /var/lib/dpkg/alternatives/iptables
	I0807 18:30:46.849245  449501 oci.go:144] the created container "addons-553671" has a running status.
	I0807 18:30:46.849287  449501 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19389-443116/.minikube/machines/addons-553671/id_rsa...
	I0807 18:30:48.340251  449501 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19389-443116/.minikube/machines/addons-553671/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0807 18:30:48.362251  449501 cli_runner.go:164] Run: docker container inspect addons-553671 --format={{.State.Status}}
	I0807 18:30:48.378450  449501 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0807 18:30:48.378470  449501 kic_runner.go:114] Args: [docker exec --privileged addons-553671 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0807 18:30:48.440408  449501 cli_runner.go:164] Run: docker container inspect addons-553671 --format={{.State.Status}}
	I0807 18:30:48.458099  449501 machine.go:94] provisionDockerMachine start ...
	I0807 18:30:48.458212  449501 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-553671
	I0807 18:30:48.475670  449501 main.go:141] libmachine: Using SSH client type: native
	I0807 18:30:48.475942  449501 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil>  [] 0s} 127.0.0.1 33163 <nil> <nil>}
	I0807 18:30:48.475951  449501 main.go:141] libmachine: About to run SSH command:
	hostname
	I0807 18:30:48.615755  449501 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-553671
	
	I0807 18:30:48.615776  449501 ubuntu.go:169] provisioning hostname "addons-553671"
	I0807 18:30:48.615840  449501 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-553671
	I0807 18:30:48.632192  449501 main.go:141] libmachine: Using SSH client type: native
	I0807 18:30:48.632470  449501 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil>  [] 0s} 127.0.0.1 33163 <nil> <nil>}
	I0807 18:30:48.632488  449501 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-553671 && echo "addons-553671" | sudo tee /etc/hostname
	I0807 18:30:48.788511  449501 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-553671
	
	I0807 18:30:48.788605  449501 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-553671
	I0807 18:30:48.805804  449501 main.go:141] libmachine: Using SSH client type: native
	I0807 18:30:48.806048  449501 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil>  [] 0s} 127.0.0.1 33163 <nil> <nil>}
	I0807 18:30:48.806071  449501 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-553671' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-553671/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-553671' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0807 18:30:48.948421  449501 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0807 18:30:48.948443  449501 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19389-443116/.minikube CaCertPath:/home/jenkins/minikube-integration/19389-443116/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19389-443116/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19389-443116/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19389-443116/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19389-443116/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19389-443116/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19389-443116/.minikube}
	I0807 18:30:48.948466  449501 ubuntu.go:177] setting up certificates
	I0807 18:30:48.948476  449501 provision.go:84] configureAuth start
	I0807 18:30:48.948537  449501 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-553671
	I0807 18:30:48.966081  449501 provision.go:143] copyHostCerts
	I0807 18:30:48.966178  449501 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19389-443116/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19389-443116/.minikube/cert.pem (1123 bytes)
	I0807 18:30:48.966295  449501 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19389-443116/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19389-443116/.minikube/key.pem (1675 bytes)
	I0807 18:30:48.966354  449501 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19389-443116/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19389-443116/.minikube/ca.pem (1082 bytes)
	I0807 18:30:48.966403  449501 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19389-443116/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19389-443116/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19389-443116/.minikube/certs/ca-key.pem org=jenkins.addons-553671 san=[127.0.0.1 192.168.49.2 addons-553671 localhost minikube]
	I0807 18:30:49.270698  449501 provision.go:177] copyRemoteCerts
	I0807 18:30:49.270780  449501 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0807 18:30:49.270822  449501 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-553671
	I0807 18:30:49.287660  449501 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/19389-443116/.minikube/machines/addons-553671/id_rsa Username:docker}
	I0807 18:30:49.389700  449501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-443116/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0807 18:30:49.416360  449501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-443116/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0807 18:30:49.441380  449501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-443116/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0807 18:30:49.466041  449501 provision.go:87] duration metric: took 517.549532ms to configureAuth
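configureAuth above issues a server certificate signed by the local minikube CA, carrying the SANs listed in the log (127.0.0.1, 192.168.49.2, addons-553671, localhost, minikube), and copies it to /etc/docker on the node. A minimal openssl sketch of the same idea; the file names are illustrative, not minikube's internals:

    # Sketch: issue a CA-signed server cert with the SANs from the log.
    openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem \
      -subj "/O=jenkins.addons-553671" -out server.csr
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
      -days 365 -out server.pem \
      -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.49.2,DNS:addons-553671,DNS:localhost,DNS:minikube')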
	I0807 18:30:49.466073  449501 ubuntu.go:193] setting minikube options for container-runtime
	I0807 18:30:49.466274  449501 config.go:182] Loaded profile config "addons-553671": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.3
	I0807 18:30:49.466286  449501 machine.go:97] duration metric: took 1.008169831s to provisionDockerMachine
	I0807 18:30:49.466304  449501 client.go:171] duration metric: took 10.325086711s to LocalClient.Create
	I0807 18:30:49.466331  449501 start.go:167] duration metric: took 10.325173167s to libmachine.API.Create "addons-553671"
	I0807 18:30:49.466348  449501 start.go:293] postStartSetup for "addons-553671" (driver="docker")
	I0807 18:30:49.466358  449501 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0807 18:30:49.466419  449501 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0807 18:30:49.466469  449501 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-553671
	I0807 18:30:49.482746  449501 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/19389-443116/.minikube/machines/addons-553671/id_rsa Username:docker}
	I0807 18:30:49.581851  449501 ssh_runner.go:195] Run: cat /etc/os-release
	I0807 18:30:49.584915  449501 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0807 18:30:49.584949  449501 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0807 18:30:49.584979  449501 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0807 18:30:49.584986  449501 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0807 18:30:49.584998  449501 filesync.go:126] Scanning /home/jenkins/minikube-integration/19389-443116/.minikube/addons for local assets ...
	I0807 18:30:49.585067  449501 filesync.go:126] Scanning /home/jenkins/minikube-integration/19389-443116/.minikube/files for local assets ...
	I0807 18:30:49.585092  449501 start.go:296] duration metric: took 118.738406ms for postStartSetup
	I0807 18:30:49.585398  449501 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-553671
	I0807 18:30:49.601488  449501 profile.go:143] Saving config to /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/addons-553671/config.json ...
	I0807 18:30:49.601780  449501 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0807 18:30:49.601834  449501 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-553671
	I0807 18:30:49.617478  449501 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/19389-443116/.minikube/machines/addons-553671/id_rsa Username:docker}
	I0807 18:30:49.713126  449501 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0807 18:30:49.717509  449501 start.go:128] duration metric: took 10.579330777s to createHost
	I0807 18:30:49.717533  449501 start.go:83] releasing machines lock for "addons-553671", held for 10.579484555s
	I0807 18:30:49.717606  449501 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-553671
	I0807 18:30:49.733037  449501 ssh_runner.go:195] Run: cat /version.json
	I0807 18:30:49.733084  449501 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0807 18:30:49.733099  449501 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-553671
	I0807 18:30:49.733127  449501 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-553671
	I0807 18:30:49.756618  449501 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/19389-443116/.minikube/machines/addons-553671/id_rsa Username:docker}
	I0807 18:30:49.772513  449501 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/19389-443116/.minikube/machines/addons-553671/id_rsa Username:docker}
	I0807 18:30:49.852149  449501 ssh_runner.go:195] Run: systemctl --version
	I0807 18:30:49.991844  449501 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0807 18:30:49.996270  449501 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0807 18:30:50.034092  449501 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0807 18:30:50.034209  449501 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0807 18:30:50.067902  449501 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0807 18:30:50.067933  449501 start.go:495] detecting cgroup driver to use...
	I0807 18:30:50.067974  449501 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0807 18:30:50.068033  449501 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0807 18:30:50.082810  449501 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0807 18:30:50.095818  449501 docker.go:217] disabling cri-docker service (if available) ...
	I0807 18:30:50.095951  449501 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0807 18:30:50.112520  449501 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0807 18:30:50.128734  449501 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0807 18:30:50.224059  449501 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0807 18:30:50.321083  449501 docker.go:233] disabling docker service ...
	I0807 18:30:50.321199  449501 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0807 18:30:50.341685  449501 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0807 18:30:50.353357  449501 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0807 18:30:50.445249  449501 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0807 18:30:50.538271  449501 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0807 18:30:50.550406  449501 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0807 18:30:50.567766  449501 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0807 18:30:50.578225  449501 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0807 18:30:50.588598  449501 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0807 18:30:50.588710  449501 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0807 18:30:50.598835  449501 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0807 18:30:50.609434  449501 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0807 18:30:50.619413  449501 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0807 18:30:50.629339  449501 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0807 18:30:50.638741  449501 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0807 18:30:50.648951  449501 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0807 18:30:50.659070  449501 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0807 18:30:50.669547  449501 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0807 18:30:50.678719  449501 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0807 18:30:50.687442  449501 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 18:30:50.769682  449501 ssh_runner.go:195] Run: sudo systemctl restart containerd
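The sed pipeline above pins the sandbox image, switches containerd to the v2 runc runtime, sets SystemdCgroup = false to match the "cgroupfs" driver detected on the host, and points the CNI conf_dir at /etc/cni/net.d before restarting the service. A quick post-edit check on the node with standard tools:

    sudo grep -nE 'SystemdCgroup|sandbox_image|conf_dir' /etc/containerd/config.toml
    sudo systemctl is-active containerd && sudo crictl info >/dev/null && echo runtime OK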
	I0807 18:30:50.906803  449501 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0807 18:30:50.906894  449501 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0807 18:30:50.910422  449501 start.go:563] Will wait 60s for crictl version
	I0807 18:30:50.910487  449501 ssh_runner.go:195] Run: which crictl
	I0807 18:30:50.913979  449501 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0807 18:30:50.949932  449501 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.19
	RuntimeApiVersion:  v1
	I0807 18:30:50.950082  449501 ssh_runner.go:195] Run: containerd --version
	I0807 18:30:50.974383  449501 ssh_runner.go:195] Run: containerd --version
	I0807 18:30:51.000799  449501 out.go:177] * Preparing Kubernetes v1.30.3 on containerd 1.7.19 ...
	I0807 18:30:51.002830  449501 cli_runner.go:164] Run: docker network inspect addons-553671 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0807 18:30:51.020767  449501 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0807 18:30:51.025194  449501 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0807 18:30:51.037368  449501 kubeadm.go:883] updating cluster {Name:addons-553671 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-553671 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0807 18:30:51.037498  449501 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime containerd
	I0807 18:30:51.037565  449501 ssh_runner.go:195] Run: sudo crictl images --output json
	I0807 18:30:51.077312  449501 containerd.go:627] all images are preloaded for containerd runtime.
	I0807 18:30:51.077341  449501 containerd.go:534] Images already preloaded, skipping extraction
	I0807 18:30:51.077417  449501 ssh_runner.go:195] Run: sudo crictl images --output json
	I0807 18:30:51.116695  449501 containerd.go:627] all images are preloaded for containerd runtime.
	I0807 18:30:51.116721  449501 cache_images.go:84] Images are preloaded, skipping loading
	I0807 18:30:51.116729  449501 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.30.3 containerd true true} ...
	I0807 18:30:51.116835  449501 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-553671 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:addons-553671 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
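The unit drop-in above clears ExecStart and relaunches the kubelet with the node-specific flags (hostname override, node IP, bootstrap kubeconfig); the log later writes it to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. To inspect the result on the node with ordinary systemd tooling:

    sudo systemctl cat kubelet
    sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf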
	I0807 18:30:51.116907  449501 ssh_runner.go:195] Run: sudo crictl info
	I0807 18:30:51.157486  449501 cni.go:84] Creating CNI manager for ""
	I0807 18:30:51.157513  449501 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0807 18:30:51.157525  449501 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0807 18:30:51.157557  449501 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-553671 NodeName:addons-553671 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0807 18:30:51.157703  449501 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-553671"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
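The multi-document config above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) is what kubeadm consumes below. A dry run is a cheap way to catch schema errors in such a file without touching the node; shown only as a sketch, assuming kubeadm v1.30 flags:

    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run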
	
	I0807 18:30:51.157781  449501 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0807 18:30:51.167853  449501 binaries.go:44] Found k8s binaries, skipping transfer
	I0807 18:30:51.167932  449501 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0807 18:30:51.177428  449501 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0807 18:30:51.196723  449501 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0807 18:30:51.215537  449501 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2167 bytes)
	I0807 18:30:51.233959  449501 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0807 18:30:51.237536  449501 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0807 18:30:51.248593  449501 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 18:30:51.333359  449501 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0807 18:30:51.349051  449501 certs.go:68] Setting up /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/addons-553671 for IP: 192.168.49.2
	I0807 18:30:51.349114  449501 certs.go:194] generating shared ca certs ...
	I0807 18:30:51.349144  449501 certs.go:226] acquiring lock for ca certs: {Name:mk02e7ae9d01c8374822222c07f7572b27877c45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 18:30:51.349298  449501 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19389-443116/.minikube/ca.key
	I0807 18:30:51.785340  449501 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19389-443116/.minikube/ca.crt ...
	I0807 18:30:51.785375  449501 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19389-443116/.minikube/ca.crt: {Name:mk8ef6ea79cef066ded2edb89430ff3fc1fc6e1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 18:30:51.785572  449501 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19389-443116/.minikube/ca.key ...
	I0807 18:30:51.785586  449501 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19389-443116/.minikube/ca.key: {Name:mk5602f92c3fc60db514269428ce4f9c0bfb90e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 18:30:51.785682  449501 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19389-443116/.minikube/proxy-client-ca.key
	I0807 18:30:52.380756  449501 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19389-443116/.minikube/proxy-client-ca.crt ...
	I0807 18:30:52.380794  449501 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19389-443116/.minikube/proxy-client-ca.crt: {Name:mkd69f3000d45653f336f07695cfc67e6df251ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 18:30:52.381045  449501 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19389-443116/.minikube/proxy-client-ca.key ...
	I0807 18:30:52.381061  449501 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19389-443116/.minikube/proxy-client-ca.key: {Name:mkcf720530440c0895beb8dd31180c2bcdb6c52f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 18:30:52.381153  449501 certs.go:256] generating profile certs ...
	I0807 18:30:52.381220  449501 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/addons-553671/client.key
	I0807 18:30:52.381240  449501 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/addons-553671/client.crt with IP's: []
	I0807 18:30:52.701136  449501 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/addons-553671/client.crt ...
	I0807 18:30:52.701168  449501 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/addons-553671/client.crt: {Name:mkbabab512eacc5d64e7e402622a7d87ec30aa4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 18:30:52.701357  449501 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/addons-553671/client.key ...
	I0807 18:30:52.701370  449501 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/addons-553671/client.key: {Name:mk77ed4a2bdaab05ca2a8c0a51c7a3a4bcbf5348 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 18:30:52.701453  449501 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/addons-553671/apiserver.key.ec886a83
	I0807 18:30:52.701473  449501 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/addons-553671/apiserver.crt.ec886a83 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0807 18:30:53.089807  449501 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/addons-553671/apiserver.crt.ec886a83 ...
	I0807 18:30:53.089843  449501 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/addons-553671/apiserver.crt.ec886a83: {Name:mkbcc5e0c953c6424967b054cba0a3a0b74dbeee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 18:30:53.090662  449501 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/addons-553671/apiserver.key.ec886a83 ...
	I0807 18:30:53.090683  449501 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/addons-553671/apiserver.key.ec886a83: {Name:mk2369141d4c62cc7142fb80a621bcf617139131 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 18:30:53.090781  449501 certs.go:381] copying /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/addons-553671/apiserver.crt.ec886a83 -> /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/addons-553671/apiserver.crt
	I0807 18:30:53.090878  449501 certs.go:385] copying /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/addons-553671/apiserver.key.ec886a83 -> /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/addons-553671/apiserver.key
	I0807 18:30:53.090930  449501 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/addons-553671/proxy-client.key
	I0807 18:30:53.090952  449501 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/addons-553671/proxy-client.crt with IP's: []
	I0807 18:30:53.646786  449501 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/addons-553671/proxy-client.crt ...
	I0807 18:30:53.646812  449501 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/addons-553671/proxy-client.crt: {Name:mkbac4680d7a294bf59fe4351c2bda36bc8821fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 18:30:53.646976  449501 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/addons-553671/proxy-client.key ...
	I0807 18:30:53.646986  449501 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/addons-553671/proxy-client.key: {Name:mk7683de0a035686a793ffc2efaaedb2ac44ebfe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 18:30:53.647154  449501 certs.go:484] found cert: /home/jenkins/minikube-integration/19389-443116/.minikube/certs/ca-key.pem (1675 bytes)
	I0807 18:30:53.647189  449501 certs.go:484] found cert: /home/jenkins/minikube-integration/19389-443116/.minikube/certs/ca.pem (1082 bytes)
	I0807 18:30:53.647216  449501 certs.go:484] found cert: /home/jenkins/minikube-integration/19389-443116/.minikube/certs/cert.pem (1123 bytes)
	I0807 18:30:53.647240  449501 certs.go:484] found cert: /home/jenkins/minikube-integration/19389-443116/.minikube/certs/key.pem (1675 bytes)
	I0807 18:30:53.647845  449501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-443116/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0807 18:30:53.674598  449501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-443116/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0807 18:30:53.700671  449501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-443116/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0807 18:30:53.727306  449501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-443116/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0807 18:30:53.751943  449501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/addons-553671/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0807 18:30:53.776089  449501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/addons-553671/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0807 18:30:53.800339  449501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/addons-553671/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0807 18:30:53.825878  449501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/addons-553671/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0807 18:30:53.850476  449501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-443116/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0807 18:30:53.875131  449501 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0807 18:30:53.894334  449501 ssh_runner.go:195] Run: openssl version
	I0807 18:30:53.899791  449501 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0807 18:30:53.909500  449501 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0807 18:30:53.913085  449501 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  7 18:30 /usr/share/ca-certificates/minikubeCA.pem
	I0807 18:30:53.913178  449501 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0807 18:30:53.920172  449501 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
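The b5213941.0 symlink follows OpenSSL's subject-hash convention: the link name is the CA certificate's subject hash plus a .0 suffix, which is how the system trust store looks certificates up. Written out explicitly:

    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"   # hash is b5213941 here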
	I0807 18:30:53.929936  449501 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0807 18:30:53.933324  449501 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0807 18:30:53.933374  449501 kubeadm.go:392] StartCluster: {Name:addons-553671 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-553671 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 18:30:53.933456  449501 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0807 18:30:53.933520  449501 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0807 18:30:53.972060  449501 cri.go:89] found id: ""
	I0807 18:30:53.972131  449501 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0807 18:30:53.981068  449501 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0807 18:30:53.990116  449501 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0807 18:30:53.990236  449501 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0807 18:30:53.999422  449501 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0807 18:30:53.999489  449501 kubeadm.go:157] found existing configuration files:
	
	I0807 18:30:53.999548  449501 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0807 18:30:54.009791  449501 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0807 18:30:54.009870  449501 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0807 18:30:54.020637  449501 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0807 18:30:54.030293  449501 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0807 18:30:54.030468  449501 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0807 18:30:54.040438  449501 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0807 18:30:54.050389  449501 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0807 18:30:54.050502  449501 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0807 18:30:54.060445  449501 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0807 18:30:54.070084  449501 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0807 18:30:54.070181  449501 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
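The four grep/rm pairs above implement stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already points at this cluster's control-plane endpoint, and removed otherwise so kubeadm regenerates it. A hedged reconstruction of that loop as a shell one-off:

    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/${f}.conf" \
        || sudo rm -f "/etc/kubernetes/${f}.conf"
    done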
	I0807 18:30:54.079433  449501 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0807 18:30:54.126202  449501 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0807 18:30:54.126540  449501 kubeadm.go:310] [preflight] Running pre-flight checks
	I0807 18:30:54.168974  449501 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0807 18:30:54.169048  449501 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1066-aws
	I0807 18:30:54.169090  449501 kubeadm.go:310] OS: Linux
	I0807 18:30:54.169139  449501 kubeadm.go:310] CGROUPS_CPU: enabled
	I0807 18:30:54.169190  449501 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0807 18:30:54.169240  449501 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0807 18:30:54.169289  449501 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0807 18:30:54.169338  449501 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0807 18:30:54.169387  449501 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0807 18:30:54.169441  449501 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0807 18:30:54.169491  449501 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0807 18:30:54.169539  449501 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0807 18:30:54.239374  449501 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0807 18:30:54.239485  449501 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0807 18:30:54.239582  449501 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0807 18:30:54.501685  449501 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0807 18:30:54.504786  449501 out.go:204]   - Generating certificates and keys ...
	I0807 18:30:54.504879  449501 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0807 18:30:54.504943  449501 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0807 18:30:54.829485  449501 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0807 18:30:55.069469  449501 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0807 18:30:56.314326  449501 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0807 18:30:56.594023  449501 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0807 18:30:57.225232  449501 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0807 18:30:57.225431  449501 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-553671 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0807 18:30:57.520236  449501 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0807 18:30:57.520729  449501 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-553671 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0807 18:30:57.964826  449501 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0807 18:30:59.635261  449501 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0807 18:30:59.919169  449501 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0807 18:30:59.919395  449501 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0807 18:31:00.307033  449501 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0807 18:31:00.862406  449501 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0807 18:31:01.160219  449501 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0807 18:31:01.441573  449501 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0807 18:31:01.949483  449501 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0807 18:31:01.956812  449501 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0807 18:31:01.959189  449501 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0807 18:31:01.961564  449501 out.go:204]   - Booting up control plane ...
	I0807 18:31:01.961672  449501 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0807 18:31:01.961754  449501 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0807 18:31:01.963124  449501 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0807 18:31:01.980810  449501 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0807 18:31:01.982016  449501 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0807 18:31:01.982087  449501 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0807 18:31:02.094260  449501 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0807 18:31:02.094350  449501 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0807 18:31:03.595362  449501 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.501189857s
	I0807 18:31:03.595454  449501 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0807 18:31:09.596835  449501 kubeadm.go:310] [api-check] The API server is healthy after 6.001465077s
	I0807 18:31:09.616686  449501 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0807 18:31:09.634591  449501 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0807 18:31:09.659656  449501 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0807 18:31:09.659860  449501 kubeadm.go:310] [mark-control-plane] Marking the node addons-553671 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0807 18:31:09.670606  449501 kubeadm.go:310] [bootstrap-token] Using token: orpx5n.vi78h7peeowvwp9w
	I0807 18:31:09.672326  449501 out.go:204]   - Configuring RBAC rules ...
	I0807 18:31:09.672479  449501 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0807 18:31:09.677887  449501 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0807 18:31:09.686570  449501 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0807 18:31:09.690850  449501 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0807 18:31:09.696547  449501 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0807 18:31:09.701727  449501 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0807 18:31:10.004620  449501 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0807 18:31:10.467454  449501 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0807 18:31:11.003326  449501 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0807 18:31:11.004727  449501 kubeadm.go:310] 
	I0807 18:31:11.004810  449501 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0807 18:31:11.004824  449501 kubeadm.go:310] 
	I0807 18:31:11.004900  449501 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0807 18:31:11.004908  449501 kubeadm.go:310] 
	I0807 18:31:11.004933  449501 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0807 18:31:11.004994  449501 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0807 18:31:11.005046  449501 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0807 18:31:11.005054  449501 kubeadm.go:310] 
	I0807 18:31:11.005106  449501 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0807 18:31:11.005115  449501 kubeadm.go:310] 
	I0807 18:31:11.005160  449501 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0807 18:31:11.005170  449501 kubeadm.go:310] 
	I0807 18:31:11.005220  449501 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0807 18:31:11.005294  449501 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0807 18:31:11.005366  449501 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0807 18:31:11.005386  449501 kubeadm.go:310] 
	I0807 18:31:11.005615  449501 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0807 18:31:11.005762  449501 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0807 18:31:11.005789  449501 kubeadm.go:310] 
	I0807 18:31:11.005911  449501 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token orpx5n.vi78h7peeowvwp9w \
	I0807 18:31:11.006058  449501 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2ecd380c03ffc7ee3d876bfbe57e427f08ff57b40d645766e1b54c33fee20bdf \
	I0807 18:31:11.006111  449501 kubeadm.go:310] 	--control-plane 
	I0807 18:31:11.006129  449501 kubeadm.go:310] 
	I0807 18:31:11.006258  449501 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0807 18:31:11.006277  449501 kubeadm.go:310] 
	I0807 18:31:11.006405  449501 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token orpx5n.vi78h7peeowvwp9w \
	I0807 18:31:11.006549  449501 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2ecd380c03ffc7ee3d876bfbe57e427f08ff57b40d645766e1b54c33fee20bdf 
	I0807 18:31:11.013906  449501 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1066-aws\n", err: exit status 1
	I0807 18:31:11.014052  449501 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
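Both warnings are non-fatal here: SystemVerification is on the --ignore-preflight-errors list above, and the "Module configs" message only means the kernel config could not be read for inspection, which is common on cloud kernels like this AWS one. One illustrative way to check whether a readable kernel config exists at all:

    ls /proc/config.gz 2>/dev/null || ls "/boot/config-$(uname -r)"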
	I0807 18:31:11.014073  449501 cni.go:84] Creating CNI manager for ""
	I0807 18:31:11.014082  449501 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0807 18:31:11.017258  449501 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0807 18:31:11.019912  449501 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0807 18:31:11.024225  449501 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0807 18:31:11.024246  449501 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0807 18:31:11.044659  449501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0807 18:31:11.329836  449501 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0807 18:31:11.329969  449501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:31:11.330054  449501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-553671 minikube.k8s.io/updated_at=2024_08_07T18_31_11_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e minikube.k8s.io/name=addons-553671 minikube.k8s.io/primary=true
	I0807 18:31:11.550211  449501 ops.go:34] apiserver oom_adj: -16
	I0807 18:31:11.550302  449501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:31:12.051413  449501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:31:12.551113  449501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:31:13.051284  449501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:31:13.551411  449501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:31:14.050828  449501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:31:14.550401  449501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:31:15.050529  449501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:31:15.550419  449501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:31:16.050489  449501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:31:16.550677  449501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:31:17.051508  449501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:31:17.550850  449501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:31:18.050524  449501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:31:18.551234  449501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:31:19.050414  449501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:31:19.551451  449501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:31:20.051290  449501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:31:20.551213  449501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:31:21.050971  449501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:31:21.551404  449501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:31:22.050948  449501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:31:22.550925  449501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:31:23.050698  449501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:31:23.550431  449501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:31:23.687306  449501 kubeadm.go:1113] duration metric: took 12.357382304s to wait for elevateKubeSystemPrivileges
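The block of "kubectl get sa default" runs above is a poll at roughly 500ms intervals: the cluster is only considered usable once the token controller has created the default ServiceAccount. A hedged reconstruction of that wait loop:

    until sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done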
	I0807 18:31:23.687334  449501 kubeadm.go:394] duration metric: took 29.753963037s to StartCluster
	I0807 18:31:23.687352  449501 settings.go:142] acquiring lock: {Name:mkf40f234ddca073ac593f3a60c7a02738b6a34f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 18:31:23.687464  449501 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19389-443116/kubeconfig
	I0807 18:31:23.687827  449501 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19389-443116/kubeconfig: {Name:mk6f3c27977886608fc27ecd6788b53bded2f437 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 18:31:23.688006  449501 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0807 18:31:23.688144  449501 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0807 18:31:23.688430  449501 config.go:182] Loaded profile config "addons-553671": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.3
	I0807 18:31:23.688464  449501 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0807 18:31:23.688539  449501 addons.go:69] Setting yakd=true in profile "addons-553671"
	I0807 18:31:23.688561  449501 addons.go:234] Setting addon yakd=true in "addons-553671"
	I0807 18:31:23.688588  449501 host.go:66] Checking if "addons-553671" exists ...
	I0807 18:31:23.689044  449501 cli_runner.go:164] Run: docker container inspect addons-553671 --format={{.State.Status}}
	I0807 18:31:23.689368  449501 addons.go:69] Setting inspektor-gadget=true in profile "addons-553671"
	I0807 18:31:23.689398  449501 addons.go:234] Setting addon inspektor-gadget=true in "addons-553671"
	I0807 18:31:23.689433  449501 host.go:66] Checking if "addons-553671" exists ...
	I0807 18:31:23.689826  449501 cli_runner.go:164] Run: docker container inspect addons-553671 --format={{.State.Status}}
	I0807 18:31:23.690058  449501 addons.go:69] Setting metrics-server=true in profile "addons-553671"
	I0807 18:31:23.690085  449501 addons.go:234] Setting addon metrics-server=true in "addons-553671"
	I0807 18:31:23.690110  449501 host.go:66] Checking if "addons-553671" exists ...
	I0807 18:31:23.690471  449501 cli_runner.go:164] Run: docker container inspect addons-553671 --format={{.State.Status}}
	I0807 18:31:23.692640  449501 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-553671"
	I0807 18:31:23.692680  449501 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-553671"
	I0807 18:31:23.692713  449501 host.go:66] Checking if "addons-553671" exists ...
	I0807 18:31:23.693121  449501 cli_runner.go:164] Run: docker container inspect addons-553671 --format={{.State.Status}}
	I0807 18:31:23.693288  449501 addons.go:69] Setting cloud-spanner=true in profile "addons-553671"
	I0807 18:31:23.693327  449501 addons.go:234] Setting addon cloud-spanner=true in "addons-553671"
	I0807 18:31:23.693462  449501 host.go:66] Checking if "addons-553671" exists ...
	I0807 18:31:23.694587  449501 addons.go:69] Setting registry=true in profile "addons-553671"
	I0807 18:31:23.694620  449501 addons.go:234] Setting addon registry=true in "addons-553671"
	I0807 18:31:23.694647  449501 host.go:66] Checking if "addons-553671" exists ...
	I0807 18:31:23.695682  449501 cli_runner.go:164] Run: docker container inspect addons-553671 --format={{.State.Status}}
	I0807 18:31:23.698926  449501 cli_runner.go:164] Run: docker container inspect addons-553671 --format={{.State.Status}}
	I0807 18:31:23.705603  449501 addons.go:69] Setting storage-provisioner=true in profile "addons-553671"
	I0807 18:31:23.705654  449501 addons.go:234] Setting addon storage-provisioner=true in "addons-553671"
	I0807 18:31:23.705697  449501 host.go:66] Checking if "addons-553671" exists ...
	I0807 18:31:23.706141  449501 cli_runner.go:164] Run: docker container inspect addons-553671 --format={{.State.Status}}
	I0807 18:31:23.706648  449501 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-553671"
	I0807 18:31:23.706752  449501 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-553671"
	I0807 18:31:23.706817  449501 host.go:66] Checking if "addons-553671" exists ...
	I0807 18:31:23.707319  449501 cli_runner.go:164] Run: docker container inspect addons-553671 --format={{.State.Status}}
	I0807 18:31:23.727820  449501 addons.go:69] Setting default-storageclass=true in profile "addons-553671"
	I0807 18:31:23.727869  449501 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-553671"
	I0807 18:31:23.729295  449501 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-553671"
	I0807 18:31:23.729342  449501 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-553671"
	I0807 18:31:23.729678  449501 cli_runner.go:164] Run: docker container inspect addons-553671 --format={{.State.Status}}
	I0807 18:31:23.730076  449501 cli_runner.go:164] Run: docker container inspect addons-553671 --format={{.State.Status}}
	I0807 18:31:23.734200  449501 addons.go:69] Setting volcano=true in profile "addons-553671"
	I0807 18:31:23.734252  449501 addons.go:234] Setting addon volcano=true in "addons-553671"
	I0807 18:31:23.734291  449501 host.go:66] Checking if "addons-553671" exists ...
	I0807 18:31:23.734820  449501 cli_runner.go:164] Run: docker container inspect addons-553671 --format={{.State.Status}}
	I0807 18:31:23.737193  449501 addons.go:69] Setting gcp-auth=true in profile "addons-553671"
	I0807 18:31:23.750957  449501 mustload.go:65] Loading cluster: addons-553671
	I0807 18:31:23.737675  449501 addons.go:69] Setting ingress=true in profile "addons-553671"
	I0807 18:31:23.764623  449501 addons.go:234] Setting addon ingress=true in "addons-553671"
	I0807 18:31:23.764675  449501 host.go:66] Checking if "addons-553671" exists ...
	I0807 18:31:23.737686  449501 addons.go:69] Setting ingress-dns=true in profile "addons-553671"
	I0807 18:31:23.779731  449501 addons.go:234] Setting addon ingress-dns=true in "addons-553671"
	I0807 18:31:23.779803  449501 host.go:66] Checking if "addons-553671" exists ...
	I0807 18:31:23.784632  449501 cli_runner.go:164] Run: docker container inspect addons-553671 --format={{.State.Status}}
	I0807 18:31:23.737742  449501 out.go:177] * Verifying Kubernetes components...
	I0807 18:31:23.748598  449501 addons.go:69] Setting volumesnapshots=true in profile "addons-553671"
	I0807 18:31:23.803063  449501 addons.go:234] Setting addon volumesnapshots=true in "addons-553671"
	I0807 18:31:23.803114  449501 host.go:66] Checking if "addons-553671" exists ...
	I0807 18:31:23.803811  449501 cli_runner.go:164] Run: docker container inspect addons-553671 --format={{.State.Status}}
	I0807 18:31:23.808937  449501 config.go:182] Loaded profile config "addons-553671": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.3
	I0807 18:31:23.809393  449501 cli_runner.go:164] Run: docker container inspect addons-553671 --format={{.State.Status}}
	I0807 18:31:23.820490  449501 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.1
	I0807 18:31:23.832722  449501 cli_runner.go:164] Run: docker container inspect addons-553671 --format={{.State.Status}}
	I0807 18:31:23.838817  449501 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0807 18:31:23.838837  449501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0807 18:31:23.838902  449501 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-553671
	I0807 18:31:23.866000  449501 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 18:31:23.867528  449501 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-553671"
	I0807 18:31:23.875955  449501 host.go:66] Checking if "addons-553671" exists ...
	I0807 18:31:23.876481  449501 cli_runner.go:164] Run: docker container inspect addons-553671 --format={{.State.Status}}
	I0807 18:31:23.906598  449501 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0807 18:31:23.907811  449501 addons.go:234] Setting addon default-storageclass=true in "addons-553671"
	I0807 18:31:23.907895  449501 host.go:66] Checking if "addons-553671" exists ...
	I0807 18:31:23.908441  449501 cli_runner.go:164] Run: docker container inspect addons-553671 --format={{.State.Status}}
	I0807 18:31:23.911493  449501 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0807 18:31:23.911552  449501 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0807 18:31:23.911637  449501 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-553671
	I0807 18:31:23.934387  449501 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0807 18:31:23.939582  449501 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0807 18:31:23.939654  449501 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0807 18:31:23.939757  449501 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-553671
	I0807 18:31:23.939997  449501 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.30.0
	I0807 18:31:23.946622  449501 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0807 18:31:23.946690  449501 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0807 18:31:23.946813  449501 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-553671
	I0807 18:31:23.962998  449501 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0807 18:31:23.965069  449501 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0807 18:31:23.965089  449501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0807 18:31:23.965155  449501 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-553671
	I0807 18:31:23.970771  449501 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0807 18:31:23.970926  449501 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.22
	I0807 18:31:23.970954  449501 out.go:177]   - Using image docker.io/registry:2.8.3
	I0807 18:31:23.970964  449501 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0807 18:31:23.997338  449501 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0807 18:31:23.999624  449501 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0807 18:31:23.999772  449501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0807 18:31:23.999975  449501 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-553671
	I0807 18:31:24.001013  449501 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0807 18:31:24.001196  449501 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0807 18:31:24.007713  449501 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0807 18:31:24.009863  449501 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0807 18:31:24.009890  449501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0807 18:31:24.009981  449501 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-553671
	I0807 18:31:24.017689  449501 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0807 18:31:24.024547  449501 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0807 18:31:24.024572  449501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0807 18:31:24.024647  449501 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-553671
	I0807 18:31:24.027854  449501 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0807 18:31:24.032124  449501 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0807 18:31:24.032156  449501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0807 18:31:24.032236  449501 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-553671
	I0807 18:31:24.044045  449501 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/19389-443116/.minikube/machines/addons-553671/id_rsa Username:docker}
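Each sshutil line here opens one SSH session into the node container through the port Docker published for 22/tcp. A manual equivalent, with the endpoint, key path, and user copied from the log line above; this is a sketch, not what the harness runs:

    # Resolve the published SSH port, then dial it directly.
    docker port addons-553671 22    # -> 127.0.0.1:33163
    ssh -p 33163 \
        -i /home/jenkins/minikube-integration/19389-443116/.minikube/machines/addons-553671/id_rsa \
        docker@127.0.0.1
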
	I0807 18:31:24.048516  449501 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0807 18:31:24.052923  449501 out.go:177]   - Using image docker.io/busybox:stable
	I0807 18:31:24.054659  449501 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0807 18:31:24.054728  449501 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0807 18:31:24.058620  449501 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0807 18:31:24.058645  449501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0807 18:31:24.058736  449501 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-553671
	I0807 18:31:24.062414  449501 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0807 18:31:24.064227  449501 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0807 18:31:24.066326  449501 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0807 18:31:24.068666  449501 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.1
	I0807 18:31:24.070546  449501 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0807 18:31:24.071302  449501 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0807 18:31:24.071335  449501 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0807 18:31:24.071409  449501 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-553671
	I0807 18:31:24.080694  449501 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0807 18:31:24.082801  449501 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0807 18:31:24.082823  449501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0807 18:31:24.082895  449501 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-553671
	I0807 18:31:24.098304  449501 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0807 18:31:24.101265  449501 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0807 18:31:24.101295  449501 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0807 18:31:24.101374  449501 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-553671
	I0807 18:31:24.122555  449501 host.go:66] Checking if "addons-553671" exists ...
	I0807 18:31:24.179457  449501 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/19389-443116/.minikube/machines/addons-553671/id_rsa Username:docker}
	I0807 18:31:24.196514  449501 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0807 18:31:24.196546  449501 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0807 18:31:24.196618  449501 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-553671
	I0807 18:31:24.251768  449501 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/19389-443116/.minikube/machines/addons-553671/id_rsa Username:docker}
	I0807 18:31:24.256891  449501 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/19389-443116/.minikube/machines/addons-553671/id_rsa Username:docker}
	I0807 18:31:24.258745  449501 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/19389-443116/.minikube/machines/addons-553671/id_rsa Username:docker}
	I0807 18:31:24.290663  449501 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/19389-443116/.minikube/machines/addons-553671/id_rsa Username:docker}
	I0807 18:31:24.305517  449501 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/19389-443116/.minikube/machines/addons-553671/id_rsa Username:docker}
	I0807 18:31:24.321068  449501 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/19389-443116/.minikube/machines/addons-553671/id_rsa Username:docker}
	I0807 18:31:24.321761  449501 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/19389-443116/.minikube/machines/addons-553671/id_rsa Username:docker}
	I0807 18:31:24.322091  449501 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/19389-443116/.minikube/machines/addons-553671/id_rsa Username:docker}
	I0807 18:31:24.329018  449501 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/19389-443116/.minikube/machines/addons-553671/id_rsa Username:docker}
	I0807 18:31:24.341182  449501 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/19389-443116/.minikube/machines/addons-553671/id_rsa Username:docker}
	I0807 18:31:24.346259  449501 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/19389-443116/.minikube/machines/addons-553671/id_rsa Username:docker}
	I0807 18:31:24.351366  449501 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/19389-443116/.minikube/machines/addons-553671/id_rsa Username:docker}
	W0807 18:31:24.370425  449501 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0807 18:31:24.370456  449501 retry.go:31] will retry after 256.778168ms: ssh: handshake failed: EOF
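The handshake EOF above is treated as transient: retry.go re-dials after the logged 256ms delay. A minimal bash sketch of the same retry-with-backoff pattern; the attempt cap and the doubling delay are assumptions for illustration, not minikube's actual policy:

    # Retry a flaky SSH dial with a growing delay (sketch only;
    # $PORT and $KEY stand in for the values from the sshutil lines).
    delay=0.25
    for attempt in 1 2 3 4 5; do
        ssh -p "$PORT" -i "$KEY" docker@127.0.0.1 true && break
        echo "attempt $attempt failed, retrying in ${delay}s" >&2
        sleep "$delay"
        delay=$(awk "BEGIN {print $delay * 2}")
    done
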
	I0807 18:31:24.834706  449501 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.146532427s)
	I0807 18:31:24.834909  449501 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
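The pipeline above edits the CoreDNS Corefile before replacing the ConfigMap: it inserts a hosts stanza ahead of the forward plugin, so that host.minikube.internal resolves to the gateway IP from inside the cluster, and enables the log plugin ahead of errors. A hedged reading of the edit, plus a way to check the result:

    # What the two sed expressions splice into the Corefile (reconstructed
    # from the command above; relative order shown, other plugins omitted):
    #
    #     log
    #     errors
    #     hosts {
    #        192.168.49.1 host.minikube.internal
    #        fallthrough
    #     }
    #     forward . /etc/resolv.conf
    #
    # Verify the injected record landed:
    kubectl --context addons-553671 -n kube-system get configmap coredns -o yaml | grep -A3 hosts
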
	I0807 18:31:24.835041  449501 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0807 18:31:24.895842  449501 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0807 18:31:24.895881  449501 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0807 18:31:24.912369  449501 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0807 18:31:24.912460  449501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0807 18:31:24.918613  449501 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0807 18:31:24.967442  449501 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0807 18:31:24.967543  449501 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0807 18:31:25.039988  449501 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0807 18:31:25.043394  449501 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0807 18:31:25.081781  449501 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0807 18:31:25.081857  449501 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0807 18:31:25.084112  449501 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0807 18:31:25.105434  449501 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0807 18:31:25.130020  449501 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0807 18:31:25.130089  449501 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0807 18:31:25.144991  449501 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0807 18:31:25.145071  449501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0807 18:31:25.166865  449501 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0807 18:31:25.167972  449501 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0807 18:31:25.168039  449501 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0807 18:31:25.181912  449501 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0807 18:31:25.181987  449501 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0807 18:31:25.192701  449501 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0807 18:31:25.192948  449501 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0807 18:31:25.192918  449501 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0807 18:31:25.229736  449501 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0807 18:31:25.270545  449501 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0807 18:31:25.494272  449501 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0807 18:31:25.494300  449501 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0807 18:31:25.511451  449501 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0807 18:31:25.511485  449501 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0807 18:31:25.515145  449501 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0807 18:31:25.515171  449501 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0807 18:31:25.594428  449501 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0807 18:31:25.594457  449501 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0807 18:31:25.624859  449501 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0807 18:31:25.624887  449501 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0807 18:31:25.754285  449501 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0807 18:31:25.754310  449501 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0807 18:31:25.763116  449501 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0807 18:31:25.763144  449501 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0807 18:31:25.767699  449501 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0807 18:31:25.767726  449501 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0807 18:31:25.933404  449501 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0807 18:31:25.933431  449501 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0807 18:31:26.021491  449501 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0807 18:31:26.136715  449501 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0807 18:31:26.136743  449501 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0807 18:31:26.154241  449501 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0807 18:31:26.154269  449501 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0807 18:31:26.228652  449501 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0807 18:31:26.228695  449501 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0807 18:31:26.422876  449501 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0807 18:31:26.422899  449501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0807 18:31:26.532470  449501 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0807 18:31:26.532513  449501 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0807 18:31:26.694763  449501 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0807 18:31:26.694787  449501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0807 18:31:26.786570  449501 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0807 18:31:26.786646  449501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0807 18:31:26.905703  449501 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0807 18:31:26.955818  449501 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0807 18:31:26.997135  449501 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0807 18:31:26.997172  449501 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0807 18:31:27.191626  449501 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0807 18:31:27.191649  449501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0807 18:31:27.200458  449501 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0807 18:31:27.200499  449501 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0807 18:31:27.289221  449501 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.370512451s)
	I0807 18:31:27.289270  449501 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.249207306s)
	I0807 18:31:27.289144  449501 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.454053562s)
	I0807 18:31:27.289536  449501 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.45458873s)
	I0807 18:31:27.289555  449501 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0807 18:31:27.290756  449501 node_ready.go:35] waiting up to 6m0s for node "addons-553671" to be "Ready" ...
	I0807 18:31:27.296004  449501 node_ready.go:49] node "addons-553671" has status "Ready":"True"
	I0807 18:31:27.296033  449501 node_ready.go:38] duration metric: took 5.248901ms for node "addons-553671" to be "Ready" ...
	I0807 18:31:27.296042  449501 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0807 18:31:27.310024  449501 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-5n8z9" in "kube-system" namespace to be "Ready" ...
	I0807 18:31:27.613865  449501 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0807 18:31:27.716109  449501 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0807 18:31:27.716142  449501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0807 18:31:27.800171  449501 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-553671" context rescaled to 1 replicas
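The rescale above drops the coredns Deployment from two replicas to one, which is most likely why the pod coredns-7db6d8ff4d-5n8z9 being waited on at 18:31:28 below comes back "not found": its replica was deleted by the scale-down. A hedged CLI equivalent of what kapi.go just did:

    # Sketch of the rescale performed by kapi.go above.
    kubectl --context addons-553671 -n kube-system scale deployment coredns --replicas=1
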
	I0807 18:31:28.215128  449501 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0807 18:31:28.215152  449501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0807 18:31:28.366793  449501 pod_ready.go:97] error getting pod "coredns-7db6d8ff4d-5n8z9" in "kube-system" namespace (skipping!): pods "coredns-7db6d8ff4d-5n8z9" not found
	I0807 18:31:28.366834  449501 pod_ready.go:81] duration metric: took 1.056772478s for pod "coredns-7db6d8ff4d-5n8z9" in "kube-system" namespace to be "Ready" ...
	E0807 18:31:28.366847  449501 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-7db6d8ff4d-5n8z9" in "kube-system" namespace (skipping!): pods "coredns-7db6d8ff4d-5n8z9" not found
	I0807 18:31:28.366856  449501 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-xnbjz" in "kube-system" namespace to be "Ready" ...
	I0807 18:31:28.657257  449501 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0807 18:31:28.657298  449501 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0807 18:31:28.776685  449501 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.733211644s)
	I0807 18:31:28.776801  449501 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.692604314s)
	I0807 18:31:29.008786  449501 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0807 18:31:30.391280  449501 pod_ready.go:102] pod "coredns-7db6d8ff4d-xnbjz" in "kube-system" namespace has status "Ready":"False"
	I0807 18:31:31.350838  449501 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0807 18:31:31.350994  449501 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-553671
	I0807 18:31:31.398195  449501 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/19389-443116/.minikube/machines/addons-553671/id_rsa Username:docker}
	I0807 18:31:31.806523  449501 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0807 18:31:31.990378  449501 addons.go:234] Setting addon gcp-auth=true in "addons-553671"
	I0807 18:31:31.990482  449501 host.go:66] Checking if "addons-553671" exists ...
	I0807 18:31:31.991018  449501 cli_runner.go:164] Run: docker container inspect addons-553671 --format={{.State.Status}}
	I0807 18:31:32.014300  449501 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0807 18:31:32.014366  449501 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-553671
	I0807 18:31:32.050162  449501 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/19389-443116/.minikube/machines/addons-553671/id_rsa Username:docker}
	I0807 18:31:32.875301  449501 pod_ready.go:102] pod "coredns-7db6d8ff4d-xnbjz" in "kube-system" namespace has status "Ready":"False"
	I0807 18:31:34.390268  449501 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (9.284739803s)
	I0807 18:31:34.390378  449501 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (9.223440673s)
	I0807 18:31:34.390400  449501 addons.go:475] Verifying addon ingress=true in "addons-553671"
	I0807 18:31:34.390839  449501 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (9.197792781s)
	I0807 18:31:34.390905  449501 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (9.161144729s)
	I0807 18:31:34.391009  449501 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (9.120439087s)
	I0807 18:31:34.391018  449501 addons.go:475] Verifying addon registry=true in "addons-553671"
	I0807 18:31:34.391229  449501 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.369702322s)
	I0807 18:31:34.391256  449501 addons.go:475] Verifying addon metrics-server=true in "addons-553671"
	I0807 18:31:34.391293  449501 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.485510787s)
	I0807 18:31:34.391590  449501 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.43572712s)
	W0807 18:31:34.391626  449501 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0807 18:31:34.391646  449501 retry.go:31] will retry after 166.996742ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
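The failure above is an ordering race, not a broken manifest: the VolumeSnapshotClass object is applied in the same batch that creates its CRD, and the API server has not established the new type yet, hence "no matches for kind" and the hint to install CRDs first. minikube simply retries, and the re-apply at 18:31:34.559 below goes through. Done by hand, the race can be avoided by waiting for the CRD before applying objects of that kind; a sketch:

    # Wait until the CRD is established, then apply the dependent object.
    kubectl wait --for condition=established --timeout=60s \
        crd/volumesnapshotclasses.snapshot.storage.k8s.io
    kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
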
	I0807 18:31:34.391717  449501 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.777821579s)
	I0807 18:31:34.394759  449501 out.go:177] * Verifying ingress addon...
	I0807 18:31:34.397258  449501 out.go:177] * Verifying registry addon...
	I0807 18:31:34.397258  449501 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-553671 service yakd-dashboard -n yakd-dashboard
	
	I0807 18:31:34.402202  449501 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0807 18:31:34.403638  449501 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0807 18:31:34.419676  449501 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0807 18:31:34.419700  449501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 18:31:34.421858  449501 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0807 18:31:34.421931  449501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
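The long run of "waiting for pod ... current state: Pending" lines that follows is kapi.go polling each label selector until every matching pod reports Ready. A rough CLI counterpart, hedged because kapi.go also tolerates pods that finish in Succeeded, which plain kubectl wait does not:

    # Approximate CLI version of the readiness polling below (sketch only;
    # completed admission/patch pods under these selectors would stall it).
    kubectl --context addons-553671 -n ingress-nginx wait pod \
        -l app.kubernetes.io/name=ingress-nginx --for=condition=Ready --timeout=6m
    kubectl --context addons-553671 -n kube-system wait pod \
        -l kubernetes.io/minikube-addons=registry --for=condition=Ready --timeout=6m
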
	I0807 18:31:34.559268  449501 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0807 18:31:34.882887  449501 pod_ready.go:102] pod "coredns-7db6d8ff4d-xnbjz" in "kube-system" namespace has status "Ready":"False"
	I0807 18:31:34.907183  449501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 18:31:34.910745  449501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 18:31:35.168557  449501 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.154187428s)
	I0807 18:31:35.168809  449501 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.159970828s)
	I0807 18:31:35.168959  449501 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-553671"
	I0807 18:31:35.170953  449501 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0807 18:31:35.172180  449501 out.go:177] * Verifying csi-hostpath-driver addon...
	I0807 18:31:35.173512  449501 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0807 18:31:35.174585  449501 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0807 18:31:35.176017  449501 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0807 18:31:35.176086  449501 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0807 18:31:35.181990  449501 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0807 18:31:35.182019  449501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 18:31:35.272751  449501 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0807 18:31:35.272774  449501 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0807 18:31:35.347316  449501 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0807 18:31:35.347384  449501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0807 18:31:35.416527  449501 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0807 18:31:35.429137  449501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 18:31:35.431473  449501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 18:31:35.680692  449501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 18:31:35.907018  449501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 18:31:35.909376  449501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 18:31:36.182268  449501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 18:31:36.193270  449501 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.633951869s)
	I0807 18:31:36.323607  449501 addons.go:475] Verifying addon gcp-auth=true in "addons-553671"
	I0807 18:31:36.325685  449501 out.go:177] * Verifying gcp-auth addon...
	I0807 18:31:36.328062  449501 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0807 18:31:36.333480  449501 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0807 18:31:36.408649  449501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 18:31:36.411609  449501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 18:31:36.680427  449501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 18:31:36.909832  449501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 18:31:36.910741  449501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 18:31:37.181188  449501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 18:31:37.373941  449501 pod_ready.go:102] pod "coredns-7db6d8ff4d-xnbjz" in "kube-system" namespace has status "Ready":"False"
	I0807 18:31:37.407168  449501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 18:31:37.410418  449501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 18:31:37.680454  449501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 18:31:37.908002  449501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 18:31:37.909593  449501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 18:31:38.180976  449501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 18:31:38.409773  449501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 18:31:38.411781  449501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 18:31:38.680941  449501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 18:31:38.910457  449501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 18:31:38.918252  449501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 18:31:39.181486  449501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 18:31:39.380923  449501 pod_ready.go:102] pod "coredns-7db6d8ff4d-xnbjz" in "kube-system" namespace has status "Ready":"False"
	I0807 18:31:39.406809  449501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 18:31:39.408650  449501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 18:31:39.689976  449501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 18:31:39.910682  449501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 18:31:39.911953  449501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 18:31:40.180733  449501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 18:31:40.410746  449501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 18:31:40.411947  449501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 18:31:40.680319  449501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 18:31:40.910507  449501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 18:31:40.912833  449501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 18:31:41.182416  449501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 18:31:41.408855  449501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 18:31:41.419385  449501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 18:31:41.682133  449501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 18:31:41.876727  449501 pod_ready.go:102] pod "coredns-7db6d8ff4d-xnbjz" in "kube-system" namespace has status "Ready":"False"
	I0807 18:31:41.907251  449501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	[... identical "waiting for pod" polling lines for the registry, csi-hostpath-driver, and ingress-nginx selectors, plus two further coredns "Ready":"False" checks at 18:31:44 and 18:31:46, repeated at ~250ms intervals through 18:31:47, elided ...]
	I0807 18:31:47.876139  449501 pod_ready.go:92] pod "coredns-7db6d8ff4d-xnbjz" in "kube-system" namespace has status "Ready":"True"
	I0807 18:31:47.876165  449501 pod_ready.go:81] duration metric: took 19.509297381s for pod "coredns-7db6d8ff4d-xnbjz" in "kube-system" namespace to be "Ready" ...
	I0807 18:31:47.876177  449501 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-553671" in "kube-system" namespace to be "Ready" ...
	I0807 18:31:47.887005  449501 pod_ready.go:92] pod "etcd-addons-553671" in "kube-system" namespace has status "Ready":"True"
	I0807 18:31:47.887032  449501 pod_ready.go:81] duration metric: took 10.847154ms for pod "etcd-addons-553671" in "kube-system" namespace to be "Ready" ...
	I0807 18:31:47.887047  449501 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-553671" in "kube-system" namespace to be "Ready" ...
	I0807 18:31:47.895054  449501 pod_ready.go:92] pod "kube-apiserver-addons-553671" in "kube-system" namespace has status "Ready":"True"
	I0807 18:31:47.895080  449501 pod_ready.go:81] duration metric: took 8.023195ms for pod "kube-apiserver-addons-553671" in "kube-system" namespace to be "Ready" ...
	I0807 18:31:47.895093  449501 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-553671" in "kube-system" namespace to be "Ready" ...
	I0807 18:31:47.908795  449501 pod_ready.go:92] pod "kube-controller-manager-addons-553671" in "kube-system" namespace has status "Ready":"True"
	I0807 18:31:47.908824  449501 pod_ready.go:81] duration metric: took 13.722491ms for pod "kube-controller-manager-addons-553671" in "kube-system" namespace to be "Ready" ...
	I0807 18:31:47.908835  449501 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2kmrh" in "kube-system" namespace to be "Ready" ...
	I0807 18:31:47.909556  449501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 18:31:47.911650  449501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 18:31:47.916099  449501 pod_ready.go:92] pod "kube-proxy-2kmrh" in "kube-system" namespace has status "Ready":"True"
	I0807 18:31:47.916125  449501 pod_ready.go:81] duration metric: took 7.281779ms for pod "kube-proxy-2kmrh" in "kube-system" namespace to be "Ready" ...
	I0807 18:31:47.916159  449501 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-553671" in "kube-system" namespace to be "Ready" ...
	I0807 18:31:48.180874  449501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 18:31:48.271439  449501 pod_ready.go:92] pod "kube-scheduler-addons-553671" in "kube-system" namespace has status "Ready":"True"
	I0807 18:31:48.271467  449501 pod_ready.go:81] duration metric: took 355.293266ms for pod "kube-scheduler-addons-553671" in "kube-system" namespace to be "Ready" ...
	I0807 18:31:48.271479  449501 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-xf5g4" in "kube-system" namespace to be "Ready" ...
	I0807 18:31:48.410578  449501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 18:31:48.413631  449501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 18:31:48.670195  449501 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-xf5g4" in "kube-system" namespace has status "Ready":"True"
	I0807 18:31:48.670225  449501 pod_ready.go:81] duration metric: took 398.738013ms for pod "nvidia-device-plugin-daemonset-xf5g4" in "kube-system" namespace to be "Ready" ...
	I0807 18:31:48.670237  449501 pod_ready.go:38] duration metric: took 21.374182489s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0807 18:31:48.670252  449501 api_server.go:52] waiting for apiserver process to appear ...
	I0807 18:31:48.670321  449501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0807 18:31:48.682290  449501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 18:31:48.687656  449501 api_server.go:72] duration metric: took 24.99962012s to wait for apiserver process to appear ...
	I0807 18:31:48.687684  449501 api_server.go:88] waiting for apiserver healthz status ...
	I0807 18:31:48.687704  449501 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0807 18:31:48.698232  449501 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0807 18:31:48.699263  449501 api_server.go:141] control plane version: v1.30.3
	I0807 18:31:48.699287  449501 api_server.go:131] duration metric: took 11.596423ms to wait for apiserver health ...
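The healthz probe above is an ordinary HTTPS GET against the apiserver, which is considered healthy when it answers 200 with the body "ok". A minimal Go sketch of the same check, assuming the endpoint from the log and skipping TLS verification purely for illustration (this is not minikube's code; the real client trusts the cluster CA and presents client certificates):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        // InsecureSkipVerify is for illustration only; a real client should
        // verify the apiserver against the cluster's CA instead.
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get("https://192.168.49.2:8443/healthz")
        if err != nil {
            fmt.Println("healthz not reachable:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Println(resp.StatusCode, string(body)) // a healthy apiserver prints: 200 ok
    }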
	I0807 18:31:48.699296  449501 system_pods.go:43] waiting for kube-system pods to appear ...
	I0807 18:31:48.879539  449501 system_pods.go:59] 18 kube-system pods found
	I0807 18:31:48.879574  449501 system_pods.go:61] "coredns-7db6d8ff4d-xnbjz" [bdd7f119-a55c-48a2-a3b4-baef9ff1cdf5] Running
	I0807 18:31:48.879583  449501 system_pods.go:61] "csi-hostpath-attacher-0" [dc1ec60d-420b-49b2-bd3a-c7c2b782d7e9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0807 18:31:48.879591  449501 system_pods.go:61] "csi-hostpath-resizer-0" [c098c5fe-7aab-45e9-b514-e53f2013fa48] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0807 18:31:48.879600  449501 system_pods.go:61] "csi-hostpathplugin-jhs59" [58592342-b175-4065-beb8-a2e3f9774085] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0807 18:31:48.879606  449501 system_pods.go:61] "etcd-addons-553671" [10c5e8ce-6e86-4ff4-b417-39c566fc0f87] Running
	I0807 18:31:48.879610  449501 system_pods.go:61] "kindnet-zhdtg" [f4817054-4aa4-42aa-b90f-17edadaa1304] Running
	I0807 18:31:48.879615  449501 system_pods.go:61] "kube-apiserver-addons-553671" [333373ce-210c-483e-9e1f-810394414597] Running
	I0807 18:31:48.879634  449501 system_pods.go:61] "kube-controller-manager-addons-553671" [70e494f6-5db4-43c6-b54c-609da112fe4c] Running
	I0807 18:31:48.879644  449501 system_pods.go:61] "kube-ingress-dns-minikube" [024b986c-b65e-45d4-a62f-8795f3fcea8f] Running
	I0807 18:31:48.879648  449501 system_pods.go:61] "kube-proxy-2kmrh" [909ce8c0-db3a-40dd-b7f7-7fa6c6286945] Running
	I0807 18:31:48.879651  449501 system_pods.go:61] "kube-scheduler-addons-553671" [3e843571-793a-426b-8349-e680a0bac2e6] Running
	I0807 18:31:48.879657  449501 system_pods.go:61] "metrics-server-c59844bb4-tkfkt" [13342b20-1c93-45d1-8e45-d50e8aeec659] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0807 18:31:48.879661  449501 system_pods.go:61] "nvidia-device-plugin-daemonset-xf5g4" [ee3238cf-e734-4685-8582-6e77f90d5f77] Running
	I0807 18:31:48.879668  449501 system_pods.go:61] "registry-698f998955-4rlp5" [1c71360f-b606-4ccd-a70a-f81190028951] Running
	I0807 18:31:48.879673  449501 system_pods.go:61] "registry-proxy-t8rmr" [0b3865c3-bbbc-4aa1-9b36-1f77fd0af331] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0807 18:31:48.879685  449501 system_pods.go:61] "snapshot-controller-745499f584-85qnx" [3ca82a8c-9f6b-43d8-9a2c-f84bbce2f445] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0807 18:31:48.879692  449501 system_pods.go:61] "snapshot-controller-745499f584-gkxc2" [0f8b5abf-4dcb-48a2-ac67-1cd05dae4a5a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0807 18:31:48.879698  449501 system_pods.go:61] "storage-provisioner" [30b2d0bc-d072-46ac-a6b1-58c228792fa1] Running
	I0807 18:31:48.879706  449501 system_pods.go:74] duration metric: took 180.403832ms to wait for pod list to return data ...
	I0807 18:31:48.879714  449501 default_sa.go:34] waiting for default service account to be created ...
	I0807 18:31:48.906744  449501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 18:31:48.908050  449501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 18:31:49.069683  449501 default_sa.go:45] found service account: "default"
	I0807 18:31:49.069715  449501 default_sa.go:55] duration metric: took 189.990778ms for default service account to be created ...
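The default-service-account wait resolves with a single GET of the "default" ServiceAccount object, which Kubernetes creates in each namespace shortly after the namespace appears. A minimal client-go sketch of that lookup, under the assumption of a standard kubeconfig (the path below is illustrative, not taken from minikube):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Illustrative kubeconfig path; adjust for your environment.
        config, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        // The controller manager creates this account automatically; the wait
        // above simply retries this GET until it succeeds.
        sa, err := cs.CoreV1().ServiceAccounts("default").Get(context.TODO(), "default", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Println("found service account:", sa.Name)
    }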
	I0807 18:31:49.069727  449501 system_pods.go:116] waiting for k8s-apps to be running ...
	I0807 18:31:49.179891  449501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 18:31:49.279085  449501 system_pods.go:86] 18 kube-system pods found
	I0807 18:31:49.279167  449501 system_pods.go:89] "coredns-7db6d8ff4d-xnbjz" [bdd7f119-a55c-48a2-a3b4-baef9ff1cdf5] Running
	I0807 18:31:49.279195  449501 system_pods.go:89] "csi-hostpath-attacher-0" [dc1ec60d-420b-49b2-bd3a-c7c2b782d7e9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0807 18:31:49.279226  449501 system_pods.go:89] "csi-hostpath-resizer-0" [c098c5fe-7aab-45e9-b514-e53f2013fa48] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0807 18:31:49.279249  449501 system_pods.go:89] "csi-hostpathplugin-jhs59" [58592342-b175-4065-beb8-a2e3f9774085] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0807 18:31:49.279269  449501 system_pods.go:89] "etcd-addons-553671" [10c5e8ce-6e86-4ff4-b417-39c566fc0f87] Running
	I0807 18:31:49.279290  449501 system_pods.go:89] "kindnet-zhdtg" [f4817054-4aa4-42aa-b90f-17edadaa1304] Running
	I0807 18:31:49.279317  449501 system_pods.go:89] "kube-apiserver-addons-553671" [333373ce-210c-483e-9e1f-810394414597] Running
	I0807 18:31:49.279336  449501 system_pods.go:89] "kube-controller-manager-addons-553671" [70e494f6-5db4-43c6-b54c-609da112fe4c] Running
	I0807 18:31:49.279355  449501 system_pods.go:89] "kube-ingress-dns-minikube" [024b986c-b65e-45d4-a62f-8795f3fcea8f] Running
	I0807 18:31:49.279376  449501 system_pods.go:89] "kube-proxy-2kmrh" [909ce8c0-db3a-40dd-b7f7-7fa6c6286945] Running
	I0807 18:31:49.279404  449501 system_pods.go:89] "kube-scheduler-addons-553671" [3e843571-793a-426b-8349-e680a0bac2e6] Running
	I0807 18:31:49.279426  449501 system_pods.go:89] "metrics-server-c59844bb4-tkfkt" [13342b20-1c93-45d1-8e45-d50e8aeec659] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0807 18:31:49.279444  449501 system_pods.go:89] "nvidia-device-plugin-daemonset-xf5g4" [ee3238cf-e734-4685-8582-6e77f90d5f77] Running
	I0807 18:31:49.279464  449501 system_pods.go:89] "registry-698f998955-4rlp5" [1c71360f-b606-4ccd-a70a-f81190028951] Running
	I0807 18:31:49.279484  449501 system_pods.go:89] "registry-proxy-t8rmr" [0b3865c3-bbbc-4aa1-9b36-1f77fd0af331] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0807 18:31:49.279508  449501 system_pods.go:89] "snapshot-controller-745499f584-85qnx" [3ca82a8c-9f6b-43d8-9a2c-f84bbce2f445] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0807 18:31:49.279531  449501 system_pods.go:89] "snapshot-controller-745499f584-gkxc2" [0f8b5abf-4dcb-48a2-ac67-1cd05dae4a5a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0807 18:31:49.279572  449501 system_pods.go:89] "storage-provisioner" [30b2d0bc-d072-46ac-a6b1-58c228792fa1] Running
	I0807 18:31:49.279596  449501 system_pods.go:126] duration metric: took 209.861501ms to wait for k8s-apps to be running ...
	I0807 18:31:49.279627  449501 system_svc.go:44] waiting for kubelet service to be running ....
	I0807 18:31:49.279699  449501 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0807 18:31:49.292110  449501 system_svc.go:56] duration metric: took 12.4719ms WaitForService to wait for kubelet
	I0807 18:31:49.292140  449501 kubeadm.go:582] duration metric: took 25.604109884s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
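The kubelet check above boils down to running systemctl is-active over SSH and testing the exit status, which is zero only when the unit is active. A local Go sketch of the same idea (minikube runs the equivalent command remotely as root):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // `systemctl is-active --quiet <unit>` prints nothing and exits 0 when
        // the unit is active, non-zero otherwise.
        if err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
            fmt.Println("kubelet is not active:", err)
            return
        }
        fmt.Println("kubelet is active")
    }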
	I0807 18:31:49.292175  449501 node_conditions.go:102] verifying NodePressure condition ...
	I0807 18:31:49.409264  449501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 18:31:49.409685  449501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 18:31:49.470946  449501 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0807 18:31:49.470987  449501 node_conditions.go:123] node cpu capacity is 2
	I0807 18:31:49.471000  449501 node_conditions.go:105] duration metric: took 178.818454ms to run NodePressure ...
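The NodePressure verification reads each node's capacity from its status; the two figures logged above are the node's ephemeral-storage and cpu capacity. A client-go sketch that lists the same fields, reusing the illustrative kubeconfig path from the earlier sketch:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // illustrative path
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            // Capacity holds the totals reported by the kubelet, e.g. cpu "2"
            // and ephemeral-storage "203034800Ki" on this node.
            fmt.Printf("%s cpu=%s ephemeral-storage=%s\n", n.Name,
                n.Status.Capacity.Cpu().String(),
                n.Status.Capacity.StorageEphemeral().String())
        }
    }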
	I0807 18:31:49.471026  449501 start.go:241] waiting for startup goroutines ...
	I0807 18:31:49.681157  449501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	[... the same "waiting for pod" polling lines for the registry, csi-hostpath-driver, and ingress-nginx selectors repeated at ~250ms intervals through 18:31:56, elided ...]
	I0807 18:31:56.908457  449501 kapi.go:107] duration metric: took 22.504816983s to wait for kubernetes.io/minikube-addons=registry ...
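The kapi.go wait that just completed for the registry selector is a poll-until-running loop over a label selector. A condensed client-go sketch of that pattern, not minikube's actual implementation: the ~250ms interval mirrors the timestamps in the log, the 6-minute timeout matches the pod_ready waits elsewhere, and the kubeconfig path is illustrative:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitForLabel polls until every pod matching selector is Running,
    // mirroring the repeated "waiting for pod <selector>" lines above.
    func waitForLabel(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            pods, err := cs.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
            if err == nil && len(pods.Items) > 0 {
                ready := true
                for _, p := range pods.Items {
                    if p.Status.Phase != corev1.PodRunning {
                        ready = false
                        fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
                    }
                }
                if ready {
                    return nil
                }
            }
            time.Sleep(250 * time.Millisecond) // the log shows ~250ms between checks
        }
        return fmt.Errorf("timed out waiting for %s", selector)
    }

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // illustrative path
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        if err := waitForLabel(cs, "kube-system", "kubernetes.io/minikube-addons=registry", 6*time.Minute); err != nil {
            panic(err)
        }
    }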
	I0807 18:31:56.909937  449501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	[... "waiting for pod" polling for the csi-hostpath-driver and ingress-nginx selectors repeated at ~250ms intervals through 18:32:28, elided ...]
	I0807 18:32:29.180484  449501 kapi.go:107] duration metric: took 54.005894811s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0807 18:32:29.407346  449501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	[... the same ingress-nginx polling line repeated at ~500ms intervals through 18:32:41, elided ...]
	I0807 18:32:42.409794  449501 kapi.go:107] duration metric: took 1m8.007590096s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0807 18:32:58.336964  449501 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0807 18:32:58.336994  449501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 18:32:58.831641  449501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	[... the same gcp-auth polling line repeated at ~500ms intervals from 18:32:58 through 18:33:58, elided ...]
	I0807 18:33:58.832603  449501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 18:33:59.331375  449501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 18:33:59.832623  449501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 18:34:00.347011  449501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 18:34:00.831950  449501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 18:34:01.332199  449501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 18:34:01.831729  449501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 18:34:02.332444  449501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 18:34:02.832219  449501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 18:34:03.331559  449501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 18:34:03.832279  449501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 18:34:04.334085  449501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 18:34:04.832163  449501 kapi.go:107] duration metric: took 2m28.504098978s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0807 18:34:04.834160  449501 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-553671 cluster.
	I0807 18:34:04.836049  449501 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0807 18:34:04.838358  449501 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0807 18:34:04.840085  449501 out.go:177] * Enabled addons: nvidia-device-plugin, default-storageclass, storage-provisioner, ingress-dns, volcano, cloud-spanner, metrics-server, inspektor-gadget, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0807 18:34:04.841454  449501 addons.go:510] duration metric: took 2m41.152974441s for enable addons: enabled=[nvidia-device-plugin default-storageclass storage-provisioner ingress-dns volcano cloud-spanner metrics-server inspektor-gadget yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0807 18:34:04.841507  449501 start.go:246] waiting for cluster config update ...
	I0807 18:34:04.841529  449501 start.go:255] writing updated cluster config ...
	I0807 18:34:04.841821  449501 ssh_runner.go:195] Run: rm -f paused
	I0807 18:34:05.195536  449501 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0807 18:34:05.198251  449501 out.go:177] * Done! kubectl is now configured to use "addons-553671" cluster and "default" namespace by default
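Editor's note: the kapi.go:96 lines above are minikube's addon wait loop, polling the labeled pod about twice a second until it leaves Pending or a deadline expires. A minimal Go sketch of that poll-until-Running pattern, with a hypothetical getPodPhase closure standing in for the real client-go pod lookup (an assumption, not minikube's actual code):

package main

import (
	"context"
	"fmt"
	"time"
)

// waitForPod polls the pod phase on a fixed cadence and logs each observation,
// mirroring the "waiting for pod ... current state: Pending" lines above.
func waitForPod(ctx context.Context, label string, getPodPhase func() string) error {
	tick := time.NewTicker(500 * time.Millisecond) // matches the ~500ms cadence in the log
	defer tick.Stop()
	for {
		select {
		case <-ctx.Done():
			return fmt.Errorf("timed out waiting for %q", label)
		case <-tick.C:
			phase := getPodPhase()
			fmt.Printf("waiting for pod %q, current state: %s\n", label, phase)
			if phase == "Running" {
				return nil
			}
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
	defer cancel()
	n := 0
	// Fake phase source: Pending three times, then Running, so the loop exits.
	_ = waitForPod(ctx, "kubernetes.io/minikube-addons=gcp-auth", func() string {
		n++
		if n > 3 {
			return "Running"
		}
		return "Pending"
	})
}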
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	efed68ef99935       d1ca868ab82aa       2 minutes ago       Exited              gadget                                   5                   89b99ced37117       gadget-jvc5n
	86c037d27f4ef       6ef582f3ec844       3 minutes ago       Running             gcp-auth                                 0                   a41a2ef861c06       gcp-auth-5db96cd9b4-fh9st
	a6eb7ae72da58       8b46b1cd48760       4 minutes ago       Running             admission                                0                   d5510752e4359       volcano-admission-5f7844f7bc-n945q
	94bf726107bd1       24f8f979639f1       4 minutes ago       Running             controller                               0                   905bd28cd27f3       ingress-nginx-controller-6d9bd977d4-7d26r
	6ab741b699a7c       ee6d597e62dc8       4 minutes ago       Running             csi-snapshotter                          0                   9594989d6cb92       csi-hostpathplugin-jhs59
	32992772d040c       642ded511e141       4 minutes ago       Running             csi-provisioner                          0                   9594989d6cb92       csi-hostpathplugin-jhs59
	19644db36e021       922312104da8a       4 minutes ago       Running             liveness-probe                           0                   9594989d6cb92       csi-hostpathplugin-jhs59
	95f2b01946b83       08f6b2990811a       5 minutes ago       Running             hostpath                                 0                   9594989d6cb92       csi-hostpathplugin-jhs59
	7ef1b65f9a641       0107d56dbc0be       5 minutes ago       Running             node-driver-registrar                    0                   9594989d6cb92       csi-hostpathplugin-jhs59
	3b149879e5ef3       d9c7ad4c226bf       5 minutes ago       Running             volcano-scheduler                        0                   198a0a3f297ed       volcano-scheduler-844f6db89b-wwz7p
	e13fe2b4fdca8       9a80d518f102c       5 minutes ago       Running             csi-attacher                             0                   91993dca8b06e       csi-hostpath-attacher-0
	5f758af59294b       1461903ec4fe9       5 minutes ago       Running             csi-external-health-monitor-controller   0                   9594989d6cb92       csi-hostpathplugin-jhs59
	4f93838a83b01       487fa743e1e22       5 minutes ago       Running             csi-resizer                              0                   0279c1ffc24c2       csi-hostpath-resizer-0
	9e0f201a89004       4d1e5c3e97420       5 minutes ago       Running             volume-snapshot-controller               0                   9e7d75a30f948       snapshot-controller-745499f584-gkxc2
	0b119816606b8       1505f556b3a7b       5 minutes ago       Running             volcano-controllers                      0                   79e3868c3c182       volcano-controllers-59cb4746db-2j5hv
	c7ef0375f0dd7       77bdba588b953       5 minutes ago       Running             yakd                                     0                   9034decdcab38       yakd-dashboard-799879c74f-vss4r
	23eecc8c01422       296b5f799fcd8       5 minutes ago       Exited              patch                                    1                   92e10c70891f7       ingress-nginx-admission-patch-fqs2w
	8ed6262be1bf3       296b5f799fcd8       5 minutes ago       Exited              create                                   0                   c948e20e35bde       ingress-nginx-admission-create-flnrz
	b82522576e01b       95dccb4df54ab       5 minutes ago       Running             metrics-server                           0                   eb1880104f6aa       metrics-server-c59844bb4-tkfkt
	52eca3396dc84       4d1e5c3e97420       5 minutes ago       Running             volume-snapshot-controller               0                   27fa2aa3a4af4       snapshot-controller-745499f584-85qnx
	dc011bbcb3cf5       7ce2150c8929b       5 minutes ago       Running             local-path-provisioner                   0                   647500c17952f       local-path-provisioner-8d985888d-f94mg
	9bd82a69cb23b       3410e1561990a       5 minutes ago       Running             registry-proxy                           0                   3fd48f60c8706       registry-proxy-t8rmr
	2d65509a9ea23       53af6e2c4c343       5 minutes ago       Running             cloud-spanner-emulator                   0                   70a89757d6d34       cloud-spanner-emulator-5455fb9b69-tzgsj
	5e7425d3a2afe       6fed88f43b276       5 minutes ago       Running             registry                                 0                   f250ea1692ccd       registry-698f998955-4rlp5
	3b7571903f634       2437cf7621777       5 minutes ago       Running             coredns                                  0                   624807371a1ee       coredns-7db6d8ff4d-xnbjz
	d76827df0ffa5       e396bbd29d2f6       5 minutes ago       Running             nvidia-device-plugin-ctr                 0                   f3ca9d397428b       nvidia-device-plugin-daemonset-xf5g4
	d5463f140c72c       35508c2f890c4       5 minutes ago       Running             minikube-ingress-dns                     0                   f0b7a6bd42b78       kube-ingress-dns-minikube
	ecb149694091e       ba04bb24b9575       5 minutes ago       Running             storage-provisioner                      0                   aff584c897b39       storage-provisioner
	974dd20c2a095       d5e283bc63d43       5 minutes ago       Running             kindnet-cni                              0                   8ab9a9834bee5       kindnet-zhdtg
	81ff73570920b       2351f570ed0ea       5 minutes ago       Running             kube-proxy                               0                   6de07018ad535       kube-proxy-2kmrh
	82889a2eeadce       8e97cdb19e7cc       6 minutes ago       Running             kube-controller-manager                  0                   a850a362d234b       kube-controller-manager-addons-553671
	aac415a199639       d48f992a22722       6 minutes ago       Running             kube-scheduler                           0                   5be8a61b4bc41       kube-scheduler-addons-553671
	f4ea0e8c4445a       61773190d42ff       6 minutes ago       Running             kube-apiserver                           0                   04be9a4ed47ae       kube-apiserver-addons-553671
	6804388fa0ce1       014faa467e297       6 minutes ago       Running             etcd                                     0                   e36f2aa683ac3       etcd-addons-553671
	
	
	==> containerd <==
	Aug 07 18:35:10 addons-553671 containerd[817]: time="2024-08-07T18:35:10.455107618Z" level=info msg="RemoveContainer for \"79accb4a3ebc925ba8641afef9fb1ff6d7f9de0e0904a61c658e39341eb6222e\" returns successfully"
	Aug 07 18:35:10 addons-553671 containerd[817]: time="2024-08-07T18:35:10.457123166Z" level=info msg="StopPodSandbox for \"24de7ca2dbffb4261cb8f117fb896b5cf00488f28f71e350752a8dfaeae4505f\""
	Aug 07 18:35:10 addons-553671 containerd[817]: time="2024-08-07T18:35:10.469349644Z" level=info msg="TearDown network for sandbox \"24de7ca2dbffb4261cb8f117fb896b5cf00488f28f71e350752a8dfaeae4505f\" successfully"
	Aug 07 18:35:10 addons-553671 containerd[817]: time="2024-08-07T18:35:10.469546289Z" level=info msg="StopPodSandbox for \"24de7ca2dbffb4261cb8f117fb896b5cf00488f28f71e350752a8dfaeae4505f\" returns successfully"
	Aug 07 18:35:10 addons-553671 containerd[817]: time="2024-08-07T18:35:10.470285289Z" level=info msg="RemovePodSandbox for \"24de7ca2dbffb4261cb8f117fb896b5cf00488f28f71e350752a8dfaeae4505f\""
	Aug 07 18:35:10 addons-553671 containerd[817]: time="2024-08-07T18:35:10.470332254Z" level=info msg="Forcibly stopping sandbox \"24de7ca2dbffb4261cb8f117fb896b5cf00488f28f71e350752a8dfaeae4505f\""
	Aug 07 18:35:10 addons-553671 containerd[817]: time="2024-08-07T18:35:10.497090822Z" level=info msg="TearDown network for sandbox \"24de7ca2dbffb4261cb8f117fb896b5cf00488f28f71e350752a8dfaeae4505f\" successfully"
	Aug 07 18:35:10 addons-553671 containerd[817]: time="2024-08-07T18:35:10.503933433Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"24de7ca2dbffb4261cb8f117fb896b5cf00488f28f71e350752a8dfaeae4505f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Aug 07 18:35:10 addons-553671 containerd[817]: time="2024-08-07T18:35:10.504062174Z" level=info msg="RemovePodSandbox \"24de7ca2dbffb4261cb8f117fb896b5cf00488f28f71e350752a8dfaeae4505f\" returns successfully"
	Aug 07 18:35:10 addons-553671 containerd[817]: time="2024-08-07T18:35:10.505227260Z" level=info msg="StopPodSandbox for \"f8313c449cc7470f4957ab3e5c4c85838c47600b41be46da8b86604eed5a2cab\""
	Aug 07 18:35:10 addons-553671 containerd[817]: time="2024-08-07T18:35:10.513074371Z" level=info msg="TearDown network for sandbox \"f8313c449cc7470f4957ab3e5c4c85838c47600b41be46da8b86604eed5a2cab\" successfully"
	Aug 07 18:35:10 addons-553671 containerd[817]: time="2024-08-07T18:35:10.513124757Z" level=info msg="StopPodSandbox for \"f8313c449cc7470f4957ab3e5c4c85838c47600b41be46da8b86604eed5a2cab\" returns successfully"
	Aug 07 18:35:10 addons-553671 containerd[817]: time="2024-08-07T18:35:10.513574414Z" level=info msg="RemovePodSandbox for \"f8313c449cc7470f4957ab3e5c4c85838c47600b41be46da8b86604eed5a2cab\""
	Aug 07 18:35:10 addons-553671 containerd[817]: time="2024-08-07T18:35:10.513620738Z" level=info msg="Forcibly stopping sandbox \"f8313c449cc7470f4957ab3e5c4c85838c47600b41be46da8b86604eed5a2cab\""
	Aug 07 18:35:10 addons-553671 containerd[817]: time="2024-08-07T18:35:10.522408336Z" level=info msg="TearDown network for sandbox \"f8313c449cc7470f4957ab3e5c4c85838c47600b41be46da8b86604eed5a2cab\" successfully"
	Aug 07 18:35:10 addons-553671 containerd[817]: time="2024-08-07T18:35:10.528503979Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f8313c449cc7470f4957ab3e5c4c85838c47600b41be46da8b86604eed5a2cab\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Aug 07 18:35:10 addons-553671 containerd[817]: time="2024-08-07T18:35:10.528620849Z" level=info msg="RemovePodSandbox \"f8313c449cc7470f4957ab3e5c4c85838c47600b41be46da8b86604eed5a2cab\" returns successfully"
	Aug 07 18:35:10 addons-553671 containerd[817]: time="2024-08-07T18:35:10.529242651Z" level=info msg="StopPodSandbox for \"57522a7809fd329e635540d1b6d98ac0032d69effb3fee93893c2523def82629\""
	Aug 07 18:35:10 addons-553671 containerd[817]: time="2024-08-07T18:35:10.537232001Z" level=info msg="TearDown network for sandbox \"57522a7809fd329e635540d1b6d98ac0032d69effb3fee93893c2523def82629\" successfully"
	Aug 07 18:35:10 addons-553671 containerd[817]: time="2024-08-07T18:35:10.537395048Z" level=info msg="StopPodSandbox for \"57522a7809fd329e635540d1b6d98ac0032d69effb3fee93893c2523def82629\" returns successfully"
	Aug 07 18:35:10 addons-553671 containerd[817]: time="2024-08-07T18:35:10.537953895Z" level=info msg="RemovePodSandbox for \"57522a7809fd329e635540d1b6d98ac0032d69effb3fee93893c2523def82629\""
	Aug 07 18:35:10 addons-553671 containerd[817]: time="2024-08-07T18:35:10.538026959Z" level=info msg="Forcibly stopping sandbox \"57522a7809fd329e635540d1b6d98ac0032d69effb3fee93893c2523def82629\""
	Aug 07 18:35:10 addons-553671 containerd[817]: time="2024-08-07T18:35:10.545895722Z" level=info msg="TearDown network for sandbox \"57522a7809fd329e635540d1b6d98ac0032d69effb3fee93893c2523def82629\" successfully"
	Aug 07 18:35:10 addons-553671 containerd[817]: time="2024-08-07T18:35:10.551822633Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"57522a7809fd329e635540d1b6d98ac0032d69effb3fee93893c2523def82629\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Aug 07 18:35:10 addons-553671 containerd[817]: time="2024-08-07T18:35:10.551981167Z" level=info msg="RemovePodSandbox \"57522a7809fd329e635540d1b6d98ac0032d69effb3fee93893c2523def82629\" returns successfully"
	
	
	==> coredns [3b7571903f63478a92bb85d683aa3a71a1a7519ac02a1dd394a3327df78348f8] <==
	[INFO] 10.244.0.5:54881 - 58718 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000079202s
	[INFO] 10.244.0.5:46922 - 46268 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.00225012s
	[INFO] 10.244.0.5:46922 - 5050 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001849185s
	[INFO] 10.244.0.5:39579 - 59164 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000107075s
	[INFO] 10.244.0.5:39579 - 3858 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000078825s
	[INFO] 10.244.0.5:47412 - 17829 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000166979s
	[INFO] 10.244.0.5:47412 - 40105 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.0002999s
	[INFO] 10.244.0.5:43068 - 3211 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000110578s
	[INFO] 10.244.0.5:43068 - 35469 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000094226s
	[INFO] 10.244.0.5:58025 - 33962 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000050788s
	[INFO] 10.244.0.5:58025 - 49324 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000154917s
	[INFO] 10.244.0.5:49379 - 43835 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001766002s
	[INFO] 10.244.0.5:49379 - 9268 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001599951s
	[INFO] 10.244.0.5:58402 - 27305 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000057451s
	[INFO] 10.244.0.5:58402 - 47524 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000040877s
	[INFO] 10.244.0.24:60551 - 61580 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000167513s
	[INFO] 10.244.0.24:45925 - 4626 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000085914s
	[INFO] 10.244.0.24:33881 - 56042 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000154631s
	[INFO] 10.244.0.24:56281 - 22681 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000072728s
	[INFO] 10.244.0.24:47331 - 22450 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000115206s
	[INFO] 10.244.0.24:49843 - 17994 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000083773s
	[INFO] 10.244.0.24:59044 - 52528 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002951809s
	[INFO] 10.244.0.24:50542 - 3465 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002225013s
	[INFO] 10.244.0.24:47031 - 41297 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001000168s
	[INFO] 10.244.0.24:48218 - 22316 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001007627s
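Editor's note: the NXDOMAIN/NOERROR pairs above are normal in-cluster DNS behavior, not failures. With the Kubernetes default ndots:5, a name with fewer than five dots is first tried with each of the pod's search domains appended, and only then as an absolute name. A minimal sketch of that expansion, with the search list inferred from the query suffixes in the log (the pod's actual /etc/resolv.conf is not shown in this report):

package main

import (
	"fmt"
	"strings"
)

func main() {
	// The name the registry client resolves, taken from the queries above.
	name := "registry.kube-system.svc.cluster.local"
	// Search list inferred from the NXDOMAIN suffixes logged by coredns.
	search := []string{
		"kube-system.svc.cluster.local",
		"svc.cluster.local",
		"cluster.local",
		"us-east-2.compute.internal",
	}
	const ndots = 5
	if strings.Count(name, ".") < ndots { // 4 dots < ndots:5, so search first
		for _, suffix := range search {
			fmt.Println(name + "." + suffix) // each answered NXDOMAIN above
		}
	}
	fmt.Println(name) // tried last as an absolute name; answered NOERROR
}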
	
	
	==> describe nodes <==
	Name:               addons-553671
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-553671
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e
	                    minikube.k8s.io/name=addons-553671
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_07T18_31_11_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-553671
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-553671"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 07 Aug 2024 18:31:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-553671
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 07 Aug 2024 18:37:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 07 Aug 2024 18:34:14 +0000   Wed, 07 Aug 2024 18:31:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 07 Aug 2024 18:34:14 +0000   Wed, 07 Aug 2024 18:31:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 07 Aug 2024 18:34:14 +0000   Wed, 07 Aug 2024 18:31:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 07 Aug 2024 18:34:14 +0000   Wed, 07 Aug 2024 18:31:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-553671
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022364Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022364Ki
	  pods:               110
	System Info:
	  Machine ID:                 5de74ac29361477b9ec8f7ae5997cc43
	  System UUID:                7f9c5525-789d-475c-893a-674ceb5d0bc3
	  Boot ID:                    1ae5b520-001f-49c1-b434-c6991d6f5702
	  Kernel Version:             5.15.0-1066-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.19
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (27 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-5455fb9b69-tzgsj      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m56s
	  gadget                      gadget-jvc5n                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m52s
	  gcp-auth                    gcp-auth-5db96cd9b4-fh9st                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m25s
	  ingress-nginx               ingress-nginx-controller-6d9bd977d4-7d26r    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         5m51s
	  kube-system                 coredns-7db6d8ff4d-xnbjz                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     5m59s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m48s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m48s
	  kube-system                 csi-hostpathplugin-jhs59                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m48s
	  kube-system                 etcd-addons-553671                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         6m13s
	  kube-system                 kindnet-zhdtg                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      6m
	  kube-system                 kube-apiserver-addons-553671                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m14s
	  kube-system                 kube-controller-manager-addons-553671        200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m13s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m55s
	  kube-system                 kube-proxy-2kmrh                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m
	  kube-system                 kube-scheduler-addons-553671                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m13s
	  kube-system                 metrics-server-c59844bb4-tkfkt               100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         5m54s
	  kube-system                 nvidia-device-plugin-daemonset-xf5g4         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m57s
	  kube-system                 registry-698f998955-4rlp5                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m55s
	  kube-system                 registry-proxy-t8rmr                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m55s
	  kube-system                 snapshot-controller-745499f584-85qnx         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m52s
	  kube-system                 snapshot-controller-745499f584-gkxc2         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m52s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m55s
	  local-path-storage          local-path-provisioner-8d985888d-f94mg       0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m54s
	  volcano-system              volcano-admission-5f7844f7bc-n945q           0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m50s
	  volcano-system              volcano-controllers-59cb4746db-2j5hv         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m50s
	  volcano-system              volcano-scheduler-844f6db89b-wwz7p           0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m50s
	  yakd-dashboard              yakd-dashboard-799879c74f-vss4r              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     5m53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 5m58s  kube-proxy       
	  Normal  Starting                 6m13s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m13s  kubelet          Node addons-553671 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m13s  kubelet          Node addons-553671 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m13s  kubelet          Node addons-553671 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             6m13s  kubelet          Node addons-553671 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  6m13s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                6m13s  kubelet          Node addons-553671 status is now: NodeReady
	  Normal  RegisteredNode           6m1s   node-controller  Node addons-553671 event: Registered Node addons-553671 in Controller
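Editor's note: the Allocated resources block above is the likely root cause of the TestAddons/serial/Volcano failure. The node advertises 2 CPUs and 1050m are already requested by system and addon pods, leaving under one full CPU of headroom. A hedged sketch of that arithmetic using the apimachinery resource types (the test job's own CPU request is not reproduced in this report and is assumed to exceed the remainder):

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	allocatable := resource.MustParse("2")   // "cpu: 2" from the Allocatable block
	requested := resource.MustParse("1050m") // "cpu 1050m (52%)" from Allocated resources
	freeMilli := allocatable.MilliValue() - requested.MilliValue()
	fmt.Printf("free cpu: %dm\n", freeMilli) // 950m
	// If test-job-nginx-0 requests a full CPU (an assumption; its spec is not
	// in this report), 1000m > 950m, which matches the scheduler's
	// "Insufficient cpu" condition quoted in the test failure above.
}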
	
	
	==> dmesg <==
	[  +0.000878] FS-Cache: N-cookie d=00000000821c87cc{9p.inode} n=000000002bfc3b6b
	[  +0.000959] FS-Cache: N-key=[8] '8a6ced0000000000'
	[  +0.002894] FS-Cache: Duplicate cookie detected
	[  +0.000632] FS-Cache: O-cookie c=00000006 [p=00000003 fl=226 nc=0 na=1]
	[  +0.000877] FS-Cache: O-cookie d=00000000821c87cc{9p.inode} n=000000002c79267e
	[  +0.001001] FS-Cache: O-key=[8] '8a6ced0000000000'
	[  +0.000652] FS-Cache: N-cookie c=0000000d [p=00000003 fl=2 nc=0 na=1]
	[  +0.000885] FS-Cache: N-cookie d=00000000821c87cc{9p.inode} n=000000006ce552ab
	[  +0.000980] FS-Cache: N-key=[8] '8a6ced0000000000'
	[  +2.429817] FS-Cache: Duplicate cookie detected
	[  +0.000885] FS-Cache: O-cookie c=00000004 [p=00000003 fl=226 nc=0 na=1]
	[  +0.000912] FS-Cache: O-cookie d=00000000821c87cc{9p.inode} n=00000000442a717b
	[  +0.000996] FS-Cache: O-key=[8] '896ced0000000000'
	[  +0.000714] FS-Cache: N-cookie c=0000000f [p=00000003 fl=2 nc=0 na=1]
	[  +0.000890] FS-Cache: N-cookie d=00000000821c87cc{9p.inode} n=000000002bfc3b6b
	[  +0.000991] FS-Cache: N-key=[8] '896ced0000000000'
	[  +0.339826] FS-Cache: Duplicate cookie detected
	[  +0.000708] FS-Cache: O-cookie c=00000009 [p=00000003 fl=226 nc=0 na=1]
	[  +0.001071] FS-Cache: O-cookie d=00000000821c87cc{9p.inode} n=000000006ab16776
	[  +0.001141] FS-Cache: O-key=[8] '946ced0000000000'
	[  +0.000757] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
	[  +0.000953] FS-Cache: N-cookie d=00000000821c87cc{9p.inode} n=00000000ea7a00cd
	[  +0.001071] FS-Cache: N-key=[8] '946ced0000000000'
	[Aug 7 17:24] hrtimer: interrupt took 4890189 ns
	[Aug 7 18:01] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [6804388fa0ce16d674e7a76beb1661557d29a7f5f5e0ab7bfaca52056c18ee59] <==
	{"level":"info","ts":"2024-08-07T18:31:03.950489Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-07T18:31:03.949618Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-08-07T18:31:03.949701Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-07T18:31:03.952022Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-07T18:31:03.954679Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-08-07T18:31:03.954991Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-07T18:31:03.955231Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-07T18:31:04.904397Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-08-07T18:31:04.904508Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-08-07T18:31:04.904555Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-08-07T18:31:04.90462Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-08-07T18:31:04.904647Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-08-07T18:31:04.904705Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-08-07T18:31:04.904764Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-08-07T18:31:04.908172Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-07T18:31:04.912579Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-553671 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-07T18:31:04.912665Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-07T18:31:04.913034Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-07T18:31:04.915096Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-07T18:31:04.920431Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-07T18:31:04.920583Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-07T18:31:04.920641Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-07T18:31:04.952446Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-07T18:31:04.95256Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-07T18:31:04.958023Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	
	
	==> gcp-auth [86c037d27f4ef9229100e8a8adeaaf62e96a05fab2d0256446595d6a909fa21d] <==
	2024/08/07 18:34:04 GCP Auth Webhook started!
	2024/08/07 18:34:21 Ready to marshal response ...
	2024/08/07 18:34:21 Ready to write response ...
	2024/08/07 18:34:22 Ready to marshal response ...
	2024/08/07 18:34:22 Ready to write response ...
	
	
	==> kernel <==
	 18:37:24 up  2:19,  0 users,  load average: 0.08, 1.28, 2.52
	Linux addons-553671 5.15.0-1066-aws #72~20.04.1-Ubuntu SMP Sat Jul 20 07:44:07 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [974dd20c2a095766e38772f8233cc75fefbb9c4980f98db4c441befc4bfb9a1e] <==
	E0807 18:36:06.643144       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	I0807 18:36:07.241508       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0807 18:36:07.241703       1 main.go:299] handling current node
	I0807 18:36:17.242022       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0807 18:36:17.242059       1 main.go:299] handling current node
	I0807 18:36:27.242103       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0807 18:36:27.242140       1 main.go:299] handling current node
	W0807 18:36:28.446897       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0807 18:36:28.446940       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	I0807 18:36:37.241988       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0807 18:36:37.242032       1 main.go:299] handling current node
	W0807 18:36:40.459647       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0807 18:36:40.459684       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	I0807 18:36:47.241528       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0807 18:36:47.241565       1 main.go:299] handling current node
	I0807 18:36:57.241870       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0807 18:36:57.241978       1 main.go:299] handling current node
	W0807 18:37:04.014764       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0807 18:37:04.014809       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	W0807 18:37:05.407616       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0807 18:37:05.407657       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	I0807 18:37:07.241394       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0807 18:37:07.241435       1 main.go:299] handling current node
	I0807 18:37:17.241492       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0807 18:37:17.241529       1 main.go:299] handling current node
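Editor's note: the kindnet warnings above are RBAC denials, and they recur without stopping the daemon from handling its node, so they are noisy rather than fatal here. The reflectors fail because the kube-system:kindnet ServiceAccount lacks list/watch on namespaces, pods, and networkpolicies. A sketch of ClusterRole rules that would satisfy those reflectors, expressed with the rbac/v1 Go types (an assumption; the role actually shipped with the addon is not shown in this report):

package main

import (
	"fmt"

	rbacv1 "k8s.io/api/rbac/v1"
)

func main() {
	// One rule per API group named in the "forbidden" messages above.
	rules := []rbacv1.PolicyRule{
		{APIGroups: []string{""}, Resources: []string{"namespaces", "pods"}, Verbs: []string{"list", "watch"}},
		{APIGroups: []string{"networking.k8s.io"}, Resources: []string{"networkpolicies"}, Verbs: []string{"list", "watch"}},
	}
	for _, r := range rules {
		fmt.Printf("%v %v %v\n", r.APIGroups, r.Resources, r.Verbs)
	}
}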
	
	
	==> kube-apiserver [f4ea0e8c4445acd85ea750b9a9f844449a01e79ca27c5eb356157d0557585fd4] <==
	W0807 18:32:38.520528       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.101.170.74:443: connect: connection refused
	W0807 18:32:39.282580       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.105.165.18:443: connect: connection refused
	E0807 18:32:39.282626       1 dispatcher.go:214] failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.105.165.18:443: connect: connection refused
	W0807 18:32:39.283006       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.101.170.74:443: connect: connection refused
	W0807 18:32:39.368533       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.105.165.18:443: connect: connection refused
	E0807 18:32:39.368595       1 dispatcher.go:214] failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.105.165.18:443: connect: connection refused
	W0807 18:32:39.369021       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.101.170.74:443: connect: connection refused
	W0807 18:32:39.605205       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.101.170.74:443: connect: connection refused
	W0807 18:32:40.644122       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.101.170.74:443: connect: connection refused
	W0807 18:32:41.670596       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.101.170.74:443: connect: connection refused
	W0807 18:32:42.746257       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.101.170.74:443: connect: connection refused
	W0807 18:32:43.822326       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.101.170.74:443: connect: connection refused
	W0807 18:32:44.914255       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.101.170.74:443: connect: connection refused
	W0807 18:32:45.930914       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.101.170.74:443: connect: connection refused
	W0807 18:32:46.951006       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.101.170.74:443: connect: connection refused
	W0807 18:32:48.002055       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.101.170.74:443: connect: connection refused
	W0807 18:32:49.055248       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.101.170.74:443: connect: connection refused
	W0807 18:32:58.253882       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.105.165.18:443: connect: connection refused
	E0807 18:32:58.253926       1 dispatcher.go:214] failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.105.165.18:443: connect: connection refused
	W0807 18:33:39.290323       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.105.165.18:443: connect: connection refused
	E0807 18:33:39.290361       1 dispatcher.go:214] failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.105.165.18:443: connect: connection refused
	W0807 18:33:39.373896       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.105.165.18:443: connect: connection refused
	E0807 18:33:39.373934       1 dispatcher.go:214] failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.105.165.18:443: connect: connection refused
	I0807 18:34:21.746989       1 controller.go:615] quota admission added evaluator for: jobs.batch.volcano.sh
	I0807 18:34:21.779779       1 controller.go:615] quota admission added evaluator for: podgroups.scheduling.volcano.sh
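Editor's note: the dispatcher lines above show both admission failure modes while the gcp-auth and Volcano webhook backends were still coming up: "failing open" means the request proceeds despite the unreachable endpoint, "failing closed" means it is rejected. That distinction is the webhook's failurePolicy. A minimal sketch in terms of the admissionregistration/v1 Go types, with webhook names taken from the log and everything else about the real configurations assumed:

package main

import (
	"fmt"

	admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
)

func main() {
	// Ignore = fail open; Fail = fail closed when the endpoint is unreachable.
	ignore := admissionregistrationv1.Ignore
	fail := admissionregistrationv1.Fail
	hooks := []admissionregistrationv1.MutatingWebhook{
		{Name: "gcp-auth-mutate.k8s.io", FailurePolicy: &ignore},
		{Name: "mutatequeue.volcano.sh", FailurePolicy: &fail},
	}
	for _, h := range hooks {
		fmt.Println(h.Name, *h.FailurePolicy)
	}
}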
	
	
	==> kube-controller-manager [82889a2eeadce51bf4b4a87a24a0276ea4cf7436506cc60f288080c666d0031e] <==
	I0807 18:33:39.315293       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0807 18:33:39.325447       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0807 18:33:39.382918       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0807 18:33:39.389407       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0807 18:33:39.396257       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0807 18:33:39.407076       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0807 18:33:40.359679       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0807 18:33:40.374232       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0807 18:33:41.472767       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0807 18:33:41.495084       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0807 18:33:42.385299       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0807 18:33:42.398431       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0807 18:33:42.479867       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0807 18:33:42.489686       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0807 18:33:42.498644       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0807 18:33:42.502555       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0807 18:33:42.513462       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0807 18:33:42.518684       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0807 18:34:04.465368       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-5db96cd9b4" duration="16.341246ms"
	I0807 18:34:04.465560       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-5db96cd9b4" duration="78.473µs"
	I0807 18:34:12.024319       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0807 18:34:12.027034       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0807 18:34:12.086797       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0807 18:34:12.089061       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0807 18:34:21.452956       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="volcano-system/volcano-admission-init"
	
	
	==> kube-proxy [81ff73570920b67de0dfe660bf25bec12a1aba5922e3206b317ed9df7ac6d04c] <==
	I0807 18:31:24.981621       1 server_linux.go:69] "Using iptables proxy"
	I0807 18:31:25.016825       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	I0807 18:31:25.199841       1 server.go:659] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0807 18:31:25.199892       1 server_linux.go:165] "Using iptables Proxier"
	I0807 18:31:25.217254       1 server_linux.go:511] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0807 18:31:25.221630       1 server_linux.go:528] "Defaulting to no-op detect-local"
	I0807 18:31:25.221684       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0807 18:31:25.221910       1 server.go:872] "Version info" version="v1.30.3"
	I0807 18:31:25.221931       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0807 18:31:25.247896       1 config.go:192] "Starting service config controller"
	I0807 18:31:25.247926       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0807 18:31:25.247974       1 config.go:101] "Starting endpoint slice config controller"
	I0807 18:31:25.247979       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0807 18:31:25.250748       1 config.go:319] "Starting node config controller"
	I0807 18:31:25.250766       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0807 18:31:25.348950       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0807 18:31:25.349020       1 shared_informer.go:320] Caches are synced for service config
	I0807 18:31:25.350932       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [aac415a199639245bdd0951f2b778da809688005d82f17d8af0354cb1a59ccb5] <==
	W0807 18:31:07.712596       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0807 18:31:07.712625       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0807 18:31:07.712736       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0807 18:31:07.712754       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0807 18:31:07.712937       1 reflector.go:547] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0807 18:31:07.712968       1 reflector.go:150] runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0807 18:31:08.541014       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0807 18:31:08.541060       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0807 18:31:08.546509       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0807 18:31:08.546553       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0807 18:31:08.604294       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0807 18:31:08.604614       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0807 18:31:08.612150       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0807 18:31:08.612188       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0807 18:31:08.687220       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0807 18:31:08.687449       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0807 18:31:08.695190       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0807 18:31:08.695442       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0807 18:31:08.794330       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0807 18:31:08.794372       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0807 18:31:08.842167       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0807 18:31:08.842213       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0807 18:31:09.013214       1 reflector.go:547] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0807 18:31:09.013270       1 reflector.go:150] runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0807 18:31:11.480196       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 07 18:35:21 addons-553671 kubelet[1548]: I0807 18:35:21.400309    1548 scope.go:117] "RemoveContainer" containerID="efed68ef99935d90e4c0c2468e6216163a3117b050bdc223670f5dd54eed5dcf"
	Aug 07 18:35:21 addons-553671 kubelet[1548]: E0807 18:35:21.400867    1548 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-jvc5n_gadget(f28419c4-a615-4cf7-bde0-5068511a7e9d)\"" pod="gadget/gadget-jvc5n" podUID="f28419c4-a615-4cf7-bde0-5068511a7e9d"
	Aug 07 18:35:34 addons-553671 kubelet[1548]: I0807 18:35:34.401054    1548 scope.go:117] "RemoveContainer" containerID="efed68ef99935d90e4c0c2468e6216163a3117b050bdc223670f5dd54eed5dcf"
	Aug 07 18:35:34 addons-553671 kubelet[1548]: E0807 18:35:34.401615    1548 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-jvc5n_gadget(f28419c4-a615-4cf7-bde0-5068511a7e9d)\"" pod="gadget/gadget-jvc5n" podUID="f28419c4-a615-4cf7-bde0-5068511a7e9d"
	Aug 07 18:35:40 addons-553671 kubelet[1548]: I0807 18:35:40.401963    1548 kubelet_pods.go:988] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-698f998955-4rlp5" secret="" err="secret \"gcp-auth\" not found"
	Aug 07 18:35:43 addons-553671 kubelet[1548]: I0807 18:35:43.401079    1548 kubelet_pods.go:988] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-t8rmr" secret="" err="secret \"gcp-auth\" not found"
	Aug 07 18:35:45 addons-553671 kubelet[1548]: I0807 18:35:45.400177    1548 scope.go:117] "RemoveContainer" containerID="efed68ef99935d90e4c0c2468e6216163a3117b050bdc223670f5dd54eed5dcf"
	Aug 07 18:35:45 addons-553671 kubelet[1548]: E0807 18:35:45.400737    1548 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-jvc5n_gadget(f28419c4-a615-4cf7-bde0-5068511a7e9d)\"" pod="gadget/gadget-jvc5n" podUID="f28419c4-a615-4cf7-bde0-5068511a7e9d"
	Aug 07 18:35:59 addons-553671 kubelet[1548]: I0807 18:35:59.400078    1548 scope.go:117] "RemoveContainer" containerID="efed68ef99935d90e4c0c2468e6216163a3117b050bdc223670f5dd54eed5dcf"
	Aug 07 18:35:59 addons-553671 kubelet[1548]: E0807 18:35:59.400759    1548 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-jvc5n_gadget(f28419c4-a615-4cf7-bde0-5068511a7e9d)\"" pod="gadget/gadget-jvc5n" podUID="f28419c4-a615-4cf7-bde0-5068511a7e9d"
	Aug 07 18:36:14 addons-553671 kubelet[1548]: I0807 18:36:14.401185    1548 scope.go:117] "RemoveContainer" containerID="efed68ef99935d90e4c0c2468e6216163a3117b050bdc223670f5dd54eed5dcf"
	Aug 07 18:36:14 addons-553671 kubelet[1548]: E0807 18:36:14.401776    1548 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-jvc5n_gadget(f28419c4-a615-4cf7-bde0-5068511a7e9d)\"" pod="gadget/gadget-jvc5n" podUID="f28419c4-a615-4cf7-bde0-5068511a7e9d"
	Aug 07 18:36:18 addons-553671 kubelet[1548]: I0807 18:36:18.400849    1548 kubelet_pods.go:988] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-xf5g4" secret="" err="secret \"gcp-auth\" not found"
	Aug 07 18:36:28 addons-553671 kubelet[1548]: I0807 18:36:28.400735    1548 scope.go:117] "RemoveContainer" containerID="efed68ef99935d90e4c0c2468e6216163a3117b050bdc223670f5dd54eed5dcf"
	Aug 07 18:36:28 addons-553671 kubelet[1548]: E0807 18:36:28.401843    1548 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-jvc5n_gadget(f28419c4-a615-4cf7-bde0-5068511a7e9d)\"" pod="gadget/gadget-jvc5n" podUID="f28419c4-a615-4cf7-bde0-5068511a7e9d"
	Aug 07 18:36:43 addons-553671 kubelet[1548]: I0807 18:36:43.400320    1548 scope.go:117] "RemoveContainer" containerID="efed68ef99935d90e4c0c2468e6216163a3117b050bdc223670f5dd54eed5dcf"
	Aug 07 18:36:43 addons-553671 kubelet[1548]: E0807 18:36:43.400891    1548 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-jvc5n_gadget(f28419c4-a615-4cf7-bde0-5068511a7e9d)\"" pod="gadget/gadget-jvc5n" podUID="f28419c4-a615-4cf7-bde0-5068511a7e9d"
	Aug 07 18:36:56 addons-553671 kubelet[1548]: I0807 18:36:56.402409    1548 kubelet_pods.go:988] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-t8rmr" secret="" err="secret \"gcp-auth\" not found"
	Aug 07 18:36:56 addons-553671 kubelet[1548]: I0807 18:36:56.403330    1548 scope.go:117] "RemoveContainer" containerID="efed68ef99935d90e4c0c2468e6216163a3117b050bdc223670f5dd54eed5dcf"
	Aug 07 18:36:56 addons-553671 kubelet[1548]: E0807 18:36:56.403782    1548 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-jvc5n_gadget(f28419c4-a615-4cf7-bde0-5068511a7e9d)\"" pod="gadget/gadget-jvc5n" podUID="f28419c4-a615-4cf7-bde0-5068511a7e9d"
	Aug 07 18:37:04 addons-553671 kubelet[1548]: I0807 18:37:04.400448    1548 kubelet_pods.go:988] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-698f998955-4rlp5" secret="" err="secret \"gcp-auth\" not found"
	Aug 07 18:37:11 addons-553671 kubelet[1548]: I0807 18:37:11.400669    1548 scope.go:117] "RemoveContainer" containerID="efed68ef99935d90e4c0c2468e6216163a3117b050bdc223670f5dd54eed5dcf"
	Aug 07 18:37:11 addons-553671 kubelet[1548]: E0807 18:37:11.401163    1548 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-jvc5n_gadget(f28419c4-a615-4cf7-bde0-5068511a7e9d)\"" pod="gadget/gadget-jvc5n" podUID="f28419c4-a615-4cf7-bde0-5068511a7e9d"
	Aug 07 18:37:23 addons-553671 kubelet[1548]: I0807 18:37:23.400840    1548 scope.go:117] "RemoveContainer" containerID="efed68ef99935d90e4c0c2468e6216163a3117b050bdc223670f5dd54eed5dcf"
	Aug 07 18:37:23 addons-553671 kubelet[1548]: E0807 18:37:23.401389    1548 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-jvc5n_gadget(f28419c4-a615-4cf7-bde0-5068511a7e9d)\"" pod="gadget/gadget-jvc5n" podUID="f28419c4-a615-4cf7-bde0-5068511a7e9d"
	
	
	==> storage-provisioner [ecb149694091e1c34ff87778a6a0855db32734a400ed8890573f5a10f1807332] <==
	I0807 18:31:29.409236       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0807 18:31:29.436282       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0807 18:31:29.436336       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0807 18:31:29.446277       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0807 18:31:29.448336       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2fb3f453-dbb3-44bc-81e8-b581a1b8627f", APIVersion:"v1", ResourceVersion:"522", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-553671_8dbae9db-3380-4a8a-bfcf-f3e766ecc864 became leader
	I0807 18:31:29.448682       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-553671_8dbae9db-3380-4a8a-bfcf-f3e766ecc864!
	I0807 18:31:29.549759       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-553671_8dbae9db-3380-4a8a-bfcf-f3e766ecc864!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-553671 -n addons-553671
helpers_test.go:261: (dbg) Run:  kubectl --context addons-553671 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-flnrz ingress-nginx-admission-patch-fqs2w test-job-nginx-0
helpers_test.go:274: ======> post-mortem[TestAddons/serial/Volcano]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-553671 describe pod ingress-nginx-admission-create-flnrz ingress-nginx-admission-patch-fqs2w test-job-nginx-0
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-553671 describe pod ingress-nginx-admission-create-flnrz ingress-nginx-admission-patch-fqs2w test-job-nginx-0: exit status 1 (95.647849ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-flnrz" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-fqs2w" not found
	Error from server (NotFound): pods "test-job-nginx-0" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-553671 describe pod ingress-nginx-admission-create-flnrz ingress-nginx-admission-patch-fqs2w test-job-nginx-0: exit status 1
--- FAIL: TestAddons/serial/Volcano (199.97s)
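
For local triage of a failure like this, the job pod and the node's capacity can be inspected before the test tears the namespace down. A minimal sketch, assuming the same kubectl context name ("addons-553671") used by the test run above:

	# scheduling events for the job pod appear at the bottom of the describe output
	kubectl --context addons-553671 describe pod test-job-nginx-0 -n my-volcano
	# compare the pod's CPU request against what the single node can actually allocate
	kubectl --context addons-553671 describe nodes | grep -A 6 -i allocatable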

x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (380.43s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-145103 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p old-k8s-version-145103 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: exit status 102 (6m16.231496197s)
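
The stop/second-start sequence behind this test can be reproduced outside CI. A minimal sketch, assuming a local minikube binary on PATH and reusing the flags from the invocation above:

	minikube start -p old-k8s-version-145103 --memory=2200 --driver=docker --container-runtime=containerd --kubernetes-version=v1.20.0
	minikube stop -p old-k8s-version-145103
	# second start: restart the existing profile and wait for all components to become healthy
	minikube start -p old-k8s-version-145103 --memory=2200 --driver=docker --container-runtime=containerd --kubernetes-version=v1.20.0 --alsologtostderr --wait=true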

-- stdout --
	* [old-k8s-version-145103] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19389
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19389-443116/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19389-443116/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	* Using the docker driver based on existing profile
	* Starting "old-k8s-version-145103" primary control-plane node in "old-k8s-version-145103" cluster
	* Pulling base image v0.0.44-1723026928-19389 ...
	* Restarting existing docker container for "old-k8s-version-145103" ...
	* Preparing Kubernetes v1.20.0 on containerd 1.7.19 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image registry.k8s.io/echoserver:1.4
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-145103 addons enable metrics-server
	
	* Enabled addons: metrics-server, storage-provisioner, dashboard, default-storageclass
	
	

-- /stdout --
** stderr ** 
	I0807 19:22:09.877395  655176 out.go:291] Setting OutFile to fd 1 ...
	I0807 19:22:09.877662  655176 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 19:22:09.877692  655176 out.go:304] Setting ErrFile to fd 2...
	I0807 19:22:09.877711  655176 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 19:22:09.877966  655176 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19389-443116/.minikube/bin
	I0807 19:22:09.878361  655176 out.go:298] Setting JSON to false
	I0807 19:22:09.879368  655176 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":11081,"bootTime":1723047449,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0807 19:22:09.879470  655176 start.go:139] virtualization:  
	I0807 19:22:09.882249  655176 out.go:177] * [old-k8s-version-145103] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0807 19:22:09.884839  655176 notify.go:220] Checking for updates...
	I0807 19:22:09.885436  655176 out.go:177]   - MINIKUBE_LOCATION=19389
	I0807 19:22:09.887459  655176 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0807 19:22:09.889441  655176 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19389-443116/kubeconfig
	I0807 19:22:09.891631  655176 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19389-443116/.minikube
	I0807 19:22:09.893435  655176 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0807 19:22:09.895185  655176 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0807 19:22:09.897428  655176 config.go:182] Loaded profile config "old-k8s-version-145103": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0807 19:22:09.899602  655176 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0807 19:22:09.901487  655176 driver.go:392] Setting default libvirt URI to qemu:///system
	I0807 19:22:09.923831  655176 docker.go:123] docker version: linux-27.1.1:Docker Engine - Community
	I0807 19:22:09.923970  655176 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0807 19:22:10.017180  655176 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:52 OomKillDisable:true NGoroutines:68 SystemTime:2024-08-07 19:22:10.002177456 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0807 19:22:10.017298  655176 docker.go:307] overlay module found
	I0807 19:22:10.019601  655176 out.go:177] * Using the docker driver based on existing profile
	I0807 19:22:10.022207  655176 start.go:297] selected driver: docker
	I0807 19:22:10.022242  655176 start.go:901] validating driver "docker" against &{Name:old-k8s-version-145103 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-145103 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 19:22:10.022359  655176 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0807 19:22:10.023047  655176 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0807 19:22:10.123790  655176 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:52 OomKillDisable:true NGoroutines:68 SystemTime:2024-08-07 19:22:10.113530206 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0807 19:22:10.124202  655176 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0807 19:22:10.124226  655176 cni.go:84] Creating CNI manager for ""
	I0807 19:22:10.124235  655176 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0807 19:22:10.124284  655176 start.go:340] cluster config:
	{Name:old-k8s-version-145103 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-145103 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 19:22:10.127257  655176 out.go:177] * Starting "old-k8s-version-145103" primary control-plane node in "old-k8s-version-145103" cluster
	I0807 19:22:10.129525  655176 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0807 19:22:10.131543  655176 out.go:177] * Pulling base image v0.0.44-1723026928-19389 ...
	I0807 19:22:10.133219  655176 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0807 19:22:10.133291  655176 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19389-443116/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0807 19:22:10.133302  655176 cache.go:56] Caching tarball of preloaded images
	I0807 19:22:10.133431  655176 preload.go:172] Found /home/jenkins/minikube-integration/19389-443116/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0807 19:22:10.133442  655176 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on containerd
	I0807 19:22:10.133556  655176 profile.go:143] Saving config to /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/old-k8s-version-145103/config.json ...
	I0807 19:22:10.133786  655176 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 in local docker daemon
	W0807 19:22:10.152609  655176 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 is of wrong architecture
	I0807 19:22:10.152630  655176 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 to local cache
	I0807 19:22:10.152708  655176 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 in local cache directory
	I0807 19:22:10.152727  655176 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 in local cache directory, skipping pull
	I0807 19:22:10.152731  655176 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 exists in cache, skipping pull
	I0807 19:22:10.152740  655176 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 as a tarball
	I0807 19:22:10.152753  655176 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 from local cache
	I0807 19:22:10.306545  655176 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 from cached tarball
	I0807 19:22:10.306581  655176 cache.go:194] Successfully downloaded all kic artifacts
	I0807 19:22:10.306611  655176 start.go:360] acquireMachinesLock for old-k8s-version-145103: {Name:mk90363bb767868395eefd84e25d39fae356d0c9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0807 19:22:10.306676  655176 start.go:364] duration metric: took 40.918µs to acquireMachinesLock for "old-k8s-version-145103"
	I0807 19:22:10.306702  655176 start.go:96] Skipping create...Using existing machine configuration
	I0807 19:22:10.306719  655176 fix.go:54] fixHost starting: 
	I0807 19:22:10.306992  655176 cli_runner.go:164] Run: docker container inspect old-k8s-version-145103 --format={{.State.Status}}
	I0807 19:22:10.326673  655176 fix.go:112] recreateIfNeeded on old-k8s-version-145103: state=Stopped err=<nil>
	W0807 19:22:10.326712  655176 fix.go:138] unexpected machine state, will restart: <nil>
	I0807 19:22:10.329261  655176 out.go:177] * Restarting existing docker container for "old-k8s-version-145103" ...
	I0807 19:22:10.331054  655176 cli_runner.go:164] Run: docker start old-k8s-version-145103
	I0807 19:22:10.741711  655176 cli_runner.go:164] Run: docker container inspect old-k8s-version-145103 --format={{.State.Status}}
	I0807 19:22:10.768295  655176 kic.go:430] container "old-k8s-version-145103" state is running.
	I0807 19:22:10.768713  655176 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-145103
	I0807 19:22:10.797425  655176 profile.go:143] Saving config to /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/old-k8s-version-145103/config.json ...
	I0807 19:22:10.797656  655176 machine.go:94] provisionDockerMachine start ...
	I0807 19:22:10.797715  655176 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-145103
	I0807 19:22:10.826248  655176 main.go:141] libmachine: Using SSH client type: native
	I0807 19:22:10.826525  655176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil>  [] 0s} 127.0.0.1 33458 <nil> <nil>}
	I0807 19:22:10.826539  655176 main.go:141] libmachine: About to run SSH command:
	hostname
	I0807 19:22:10.827288  655176 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56544->127.0.0.1:33458: read: connection reset by peer
	I0807 19:22:13.976300  655176 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-145103
	
	I0807 19:22:13.976382  655176 ubuntu.go:169] provisioning hostname "old-k8s-version-145103"
	I0807 19:22:13.976477  655176 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-145103
	I0807 19:22:14.006273  655176 main.go:141] libmachine: Using SSH client type: native
	I0807 19:22:14.006546  655176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil>  [] 0s} 127.0.0.1 33458 <nil> <nil>}
	I0807 19:22:14.006558  655176 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-145103 && echo "old-k8s-version-145103" | sudo tee /etc/hostname
	I0807 19:22:14.170226  655176 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-145103
	
	I0807 19:22:14.170364  655176 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-145103
	I0807 19:22:14.190510  655176 main.go:141] libmachine: Using SSH client type: native
	I0807 19:22:14.190762  655176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil>  [] 0s} 127.0.0.1 33458 <nil> <nil>}
	I0807 19:22:14.190779  655176 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-145103' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-145103/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-145103' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0807 19:22:14.340842  655176 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0807 19:22:14.340908  655176 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19389-443116/.minikube CaCertPath:/home/jenkins/minikube-integration/19389-443116/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19389-443116/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19389-443116/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19389-443116/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19389-443116/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19389-443116/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19389-443116/.minikube}
	I0807 19:22:14.340966  655176 ubuntu.go:177] setting up certificates
	I0807 19:22:14.340993  655176 provision.go:84] configureAuth start
	I0807 19:22:14.341082  655176 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-145103
	I0807 19:22:14.363752  655176 provision.go:143] copyHostCerts
	I0807 19:22:14.363817  655176 exec_runner.go:144] found /home/jenkins/minikube-integration/19389-443116/.minikube/ca.pem, removing ...
	I0807 19:22:14.363826  655176 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19389-443116/.minikube/ca.pem
	I0807 19:22:14.363911  655176 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19389-443116/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19389-443116/.minikube/ca.pem (1082 bytes)
	I0807 19:22:14.364012  655176 exec_runner.go:144] found /home/jenkins/minikube-integration/19389-443116/.minikube/cert.pem, removing ...
	I0807 19:22:14.364018  655176 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19389-443116/.minikube/cert.pem
	I0807 19:22:14.364045  655176 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19389-443116/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19389-443116/.minikube/cert.pem (1123 bytes)
	I0807 19:22:14.364106  655176 exec_runner.go:144] found /home/jenkins/minikube-integration/19389-443116/.minikube/key.pem, removing ...
	I0807 19:22:14.364111  655176 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19389-443116/.minikube/key.pem
	I0807 19:22:14.364134  655176 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19389-443116/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19389-443116/.minikube/key.pem (1675 bytes)
	I0807 19:22:14.364189  655176 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19389-443116/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19389-443116/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19389-443116/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-145103 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-145103]
	I0807 19:22:14.780199  655176 provision.go:177] copyRemoteCerts
	I0807 19:22:14.780331  655176 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0807 19:22:14.780405  655176 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-145103
	I0807 19:22:14.801373  655176 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/19389-443116/.minikube/machines/old-k8s-version-145103/id_rsa Username:docker}
	I0807 19:22:14.908563  655176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-443116/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0807 19:22:14.982699  655176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-443116/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0807 19:22:15.025489  655176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-443116/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0807 19:22:15.141328  655176 provision.go:87] duration metric: took 800.305804ms to configureAuth
	I0807 19:22:15.141357  655176 ubuntu.go:193] setting minikube options for container-runtime
	I0807 19:22:15.141693  655176 config.go:182] Loaded profile config "old-k8s-version-145103": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0807 19:22:15.141706  655176 machine.go:97] duration metric: took 4.344042026s to provisionDockerMachine
	I0807 19:22:15.141716  655176 start.go:293] postStartSetup for "old-k8s-version-145103" (driver="docker")
	I0807 19:22:15.141728  655176 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0807 19:22:15.141837  655176 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0807 19:22:15.142041  655176 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-145103
	I0807 19:22:15.195333  655176 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/19389-443116/.minikube/machines/old-k8s-version-145103/id_rsa Username:docker}
	I0807 19:22:15.318977  655176 ssh_runner.go:195] Run: cat /etc/os-release
	I0807 19:22:15.322540  655176 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0807 19:22:15.322577  655176 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0807 19:22:15.322588  655176 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0807 19:22:15.322595  655176 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0807 19:22:15.322606  655176 filesync.go:126] Scanning /home/jenkins/minikube-integration/19389-443116/.minikube/addons for local assets ...
	I0807 19:22:15.322666  655176 filesync.go:126] Scanning /home/jenkins/minikube-integration/19389-443116/.minikube/files for local assets ...
	I0807 19:22:15.322749  655176 filesync.go:149] local asset: /home/jenkins/minikube-integration/19389-443116/.minikube/files/etc/ssl/certs/4484882.pem -> 4484882.pem in /etc/ssl/certs
	I0807 19:22:15.322865  655176 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0807 19:22:15.334691  655176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-443116/.minikube/files/etc/ssl/certs/4484882.pem --> /etc/ssl/certs/4484882.pem (1708 bytes)
	I0807 19:22:15.365134  655176 start.go:296] duration metric: took 223.401286ms for postStartSetup
	I0807 19:22:15.365216  655176 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0807 19:22:15.365283  655176 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-145103
	I0807 19:22:15.390813  655176 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/19389-443116/.minikube/machines/old-k8s-version-145103/id_rsa Username:docker}
	I0807 19:22:15.504569  655176 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0807 19:22:15.510723  655176 fix.go:56] duration metric: took 5.204006679s for fixHost
	I0807 19:22:15.510756  655176 start.go:83] releasing machines lock for "old-k8s-version-145103", held for 5.204065492s
	I0807 19:22:15.510824  655176 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-145103
	I0807 19:22:15.536958  655176 ssh_runner.go:195] Run: cat /version.json
	I0807 19:22:15.537061  655176 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-145103
	I0807 19:22:15.537307  655176 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0807 19:22:15.537363  655176 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-145103
	I0807 19:22:15.580257  655176 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/19389-443116/.minikube/machines/old-k8s-version-145103/id_rsa Username:docker}
	I0807 19:22:15.588889  655176 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/19389-443116/.minikube/machines/old-k8s-version-145103/id_rsa Username:docker}
	I0807 19:22:15.706349  655176 ssh_runner.go:195] Run: systemctl --version
	I0807 19:22:15.864073  655176 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0807 19:22:15.868724  655176 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0807 19:22:15.894270  655176 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0807 19:22:15.894352  655176 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0807 19:22:15.905151  655176 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0807 19:22:15.905173  655176 start.go:495] detecting cgroup driver to use...
	I0807 19:22:15.905205  655176 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0807 19:22:15.905255  655176 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0807 19:22:15.922317  655176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0807 19:22:15.935714  655176 docker.go:217] disabling cri-docker service (if available) ...
	I0807 19:22:15.935773  655176 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0807 19:22:15.950215  655176 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0807 19:22:15.962942  655176 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0807 19:22:16.073162  655176 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0807 19:22:16.203189  655176 docker.go:233] disabling docker service ...
	I0807 19:22:16.203261  655176 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0807 19:22:16.221643  655176 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0807 19:22:16.237529  655176 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0807 19:22:16.419947  655176 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0807 19:22:16.516971  655176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0807 19:22:16.529591  655176 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0807 19:22:16.550804  655176 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I0807 19:22:16.562455  655176 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0807 19:22:16.573761  655176 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0807 19:22:16.573831  655176 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0807 19:22:16.585296  655176 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0807 19:22:16.596644  655176 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0807 19:22:16.616962  655176 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0807 19:22:16.632901  655176 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0807 19:22:16.642760  655176 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0807 19:22:16.653925  655176 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0807 19:22:16.663356  655176 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0807 19:22:16.673027  655176 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 19:22:16.869832  655176 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0807 19:22:17.098041  655176 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0807 19:22:17.098129  655176 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0807 19:22:17.102354  655176 start.go:563] Will wait 60s for crictl version
	I0807 19:22:17.102425  655176 ssh_runner.go:195] Run: which crictl
	I0807 19:22:17.105916  655176 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0807 19:22:17.161484  655176 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.19
	RuntimeApiVersion:  v1
	I0807 19:22:17.161558  655176 ssh_runner.go:195] Run: containerd --version
	I0807 19:22:17.200522  655176 ssh_runner.go:195] Run: containerd --version
	I0807 19:22:17.228599  655176 out.go:177] * Preparing Kubernetes v1.20.0 on containerd 1.7.19 ...
	I0807 19:22:17.230509  655176 cli_runner.go:164] Run: docker network inspect old-k8s-version-145103 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0807 19:22:17.262920  655176 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0807 19:22:17.266777  655176 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0807 19:22:17.277520  655176 kubeadm.go:883] updating cluster {Name:old-k8s-version-145103 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-145103 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0807 19:22:17.277662  655176 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0807 19:22:17.277756  655176 ssh_runner.go:195] Run: sudo crictl images --output json
	I0807 19:22:17.333482  655176 containerd.go:627] all images are preloaded for containerd runtime.
	I0807 19:22:17.333504  655176 containerd.go:534] Images already preloaded, skipping extraction
	I0807 19:22:17.333571  655176 ssh_runner.go:195] Run: sudo crictl images --output json
	I0807 19:22:17.398581  655176 containerd.go:627] all images are preloaded for containerd runtime.
	I0807 19:22:17.398602  655176 cache_images.go:84] Images are preloaded, skipping loading
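
Both crictl listings above return the full preloaded image set, so tarball extraction and image loading are skipped. A hedged way to spot-check a single required image from that JSON output (jq assumed available; the image name is an example for v1.20.0, which shipped from k8s.gcr.io):

    sudo crictl images --output json \
      | jq -r '.images[].repoTags[]?' \
      | grep -qx 'k8s.gcr.io/kube-apiserver:v1.20.0' \
      && echo preloaded || echo missing
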
	I0807 19:22:17.398610  655176 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.20.0 containerd true true} ...
	I0807 19:22:17.398726  655176 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=old-k8s-version-145103 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-145103 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
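
In the kubelet unit above, the empty ExecStart= line is the standard systemd drop-in idiom: it clears the ExecStart inherited from the base unit before the override sets the real command. The steps that follow in the log (scp to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, then daemon-reload and start) install it; a condensed sketch with the flag list truncated for brevity:

    sudo mkdir -p /etc/systemd/system/kubelet.service.d
    sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<'EOF'
    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --kubeconfig=/etc/kubernetes/kubelet.conf
    EOF
    sudo systemctl daemon-reload
    sudo systemctl restart kubelet
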
	I0807 19:22:17.398788  655176 ssh_runner.go:195] Run: sudo crictl info
	I0807 19:22:17.441974  655176 cni.go:84] Creating CNI manager for ""
	I0807 19:22:17.442060  655176 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0807 19:22:17.442087  655176 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0807 19:22:17.442136  655176 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-145103 NodeName:old-k8s-version-145103 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}

	I0807 19:22:17.442306  655176 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "old-k8s-version-145103"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
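
The rendered kubeadm config above is written to /var/tmp/minikube/kubeadm.yaml.new a few lines below, and later diffed against the live copy to decide whether the control plane needs reconfiguring. That check is just:

    # A clean diff is what lets minikube log "does not require reconfiguration".
    sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new \
      && echo "no reconfiguration required"
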
	
	I0807 19:22:17.442415  655176 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0807 19:22:17.451651  655176 binaries.go:44] Found k8s binaries, skipping transfer
	I0807 19:22:17.451723  655176 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0807 19:22:17.460490  655176 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (442 bytes)
	I0807 19:22:17.478765  655176 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0807 19:22:17.497338  655176 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2125 bytes)
	I0807 19:22:17.520445  655176 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0807 19:22:17.524216  655176 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0807 19:22:17.535021  655176 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 19:22:17.641965  655176 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0807 19:22:17.670228  655176 certs.go:68] Setting up /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/old-k8s-version-145103 for IP: 192.168.85.2
	I0807 19:22:17.670245  655176 certs.go:194] generating shared ca certs ...
	I0807 19:22:17.670259  655176 certs.go:226] acquiring lock for ca certs: {Name:mk02e7ae9d01c8374822222c07f7572b27877c45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 19:22:17.670377  655176 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19389-443116/.minikube/ca.key
	I0807 19:22:17.670417  655176 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19389-443116/.minikube/proxy-client-ca.key
	I0807 19:22:17.670424  655176 certs.go:256] generating profile certs ...
	I0807 19:22:17.670504  655176 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/old-k8s-version-145103/client.key
	I0807 19:22:17.670566  655176 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/old-k8s-version-145103/apiserver.key.cf368c16
	I0807 19:22:17.670611  655176 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/old-k8s-version-145103/proxy-client.key
	I0807 19:22:17.670726  655176 certs.go:484] found cert: /home/jenkins/minikube-integration/19389-443116/.minikube/certs/448488.pem (1338 bytes)
	W0807 19:22:17.670756  655176 certs.go:480] ignoring /home/jenkins/minikube-integration/19389-443116/.minikube/certs/448488_empty.pem, impossibly tiny 0 bytes
	I0807 19:22:17.670764  655176 certs.go:484] found cert: /home/jenkins/minikube-integration/19389-443116/.minikube/certs/ca-key.pem (1675 bytes)
	I0807 19:22:17.670790  655176 certs.go:484] found cert: /home/jenkins/minikube-integration/19389-443116/.minikube/certs/ca.pem (1082 bytes)
	I0807 19:22:17.670810  655176 certs.go:484] found cert: /home/jenkins/minikube-integration/19389-443116/.minikube/certs/cert.pem (1123 bytes)
	I0807 19:22:17.670831  655176 certs.go:484] found cert: /home/jenkins/minikube-integration/19389-443116/.minikube/certs/key.pem (1675 bytes)
	I0807 19:22:17.670873  655176 certs.go:484] found cert: /home/jenkins/minikube-integration/19389-443116/.minikube/files/etc/ssl/certs/4484882.pem (1708 bytes)
	I0807 19:22:17.672213  655176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-443116/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0807 19:22:17.755379  655176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-443116/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0807 19:22:17.780666  655176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-443116/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0807 19:22:17.822192  655176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-443116/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0807 19:22:17.868633  655176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/old-k8s-version-145103/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0807 19:22:17.914186  655176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/old-k8s-version-145103/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0807 19:22:17.964086  655176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/old-k8s-version-145103/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0807 19:22:18.026997  655176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/old-k8s-version-145103/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0807 19:22:18.063060  655176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-443116/.minikube/files/etc/ssl/certs/4484882.pem --> /usr/share/ca-certificates/4484882.pem (1708 bytes)
	I0807 19:22:18.093538  655176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-443116/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0807 19:22:18.125836  655176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-443116/.minikube/certs/448488.pem --> /usr/share/ca-certificates/448488.pem (1338 bytes)
	I0807 19:22:18.165525  655176 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0807 19:22:18.190217  655176 ssh_runner.go:195] Run: openssl version
	I0807 19:22:18.198603  655176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/448488.pem && ln -fs /usr/share/ca-certificates/448488.pem /etc/ssl/certs/448488.pem"
	I0807 19:22:18.211281  655176 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/448488.pem
	I0807 19:22:18.216322  655176 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  7 18:41 /usr/share/ca-certificates/448488.pem
	I0807 19:22:18.216421  655176 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/448488.pem
	I0807 19:22:18.225174  655176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/448488.pem /etc/ssl/certs/51391683.0"
	I0807 19:22:18.235585  655176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4484882.pem && ln -fs /usr/share/ca-certificates/4484882.pem /etc/ssl/certs/4484882.pem"
	I0807 19:22:18.247337  655176 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4484882.pem
	I0807 19:22:18.252578  655176 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  7 18:41 /usr/share/ca-certificates/4484882.pem
	I0807 19:22:18.252754  655176 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4484882.pem
	I0807 19:22:18.263471  655176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4484882.pem /etc/ssl/certs/3ec20f2e.0"
	I0807 19:22:18.276463  655176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0807 19:22:18.290178  655176 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0807 19:22:18.294330  655176 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  7 18:30 /usr/share/ca-certificates/minikubeCA.pem
	I0807 19:22:18.294436  655176 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0807 19:22:18.302823  655176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
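
Each of the three blocks above follows the c_rehash convention: compute the certificate's subject-name hash with openssl and link /etc/ssl/certs/<hash>.0 at the PEM so OpenSSL-based clients can find the CA by hash. One iteration spelled out, using paths from this run:

    PEM=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$PEM")   # b5213941 for this CA
    sudo ln -fs "$PEM" "/etc/ssl/certs/${HASH}.0"  # .0 = first cert with this hash
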
	I0807 19:22:18.314595  655176 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0807 19:22:18.319785  655176 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0807 19:22:18.327509  655176 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0807 19:22:18.335332  655176 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0807 19:22:18.343641  655176 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0807 19:22:18.351328  655176 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0807 19:22:18.358966  655176 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
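
The six openssl runs above use -checkend 86400, which exits non-zero if the certificate will expire within the next 86400 seconds (24 hours); a failure here would trigger certificate regeneration. The equivalent standalone check:

    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
      || echo "certificate expires within 24h; regeneration needed"
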
	I0807 19:22:18.366661  655176 kubeadm.go:392] StartCluster: {Name:old-k8s-version-145103 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-145103 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 19:22:18.366816  655176 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0807 19:22:18.366908  655176 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0807 19:22:18.420106  655176 cri.go:89] found id: "6e6c60154928b7200ebdc9a3e446964a9486060a56e068fc228fc1cf486ea9f2"
	I0807 19:22:18.420183  655176 cri.go:89] found id: "1d7137ab81f756c634b0c7c58cd62f9d324463ad793954a60c9479bdca07c1d9"
	I0807 19:22:18.420202  655176 cri.go:89] found id: "6af7f4ba59ec4933af846ac8fb724839bcb93483b78c1f90c46de2a26750e184"
	I0807 19:22:18.420226  655176 cri.go:89] found id: "6fa7a0f941cdfdb925109b190e19418627e84b3541aca1a841d45a2170aab263"
	I0807 19:22:18.420244  655176 cri.go:89] found id: "b297faa73cadffc51a549e10c40129c612c36462461c40cbdb9bc641d6ee9a07"
	I0807 19:22:18.420262  655176 cri.go:89] found id: "3e7229b91d01277de26c3bdbed648db1659a47ae7a01f17a25604535059e3b69"
	I0807 19:22:18.420280  655176 cri.go:89] found id: "9528fcb65a1d0f25a2c40a2d86715330fc6214aec182d07f3aaaf17856447d71"
	I0807 19:22:18.420297  655176 cri.go:89] found id: "9cc157f961887e16a46adadfe9ca360b16c7c88fa3fb260387fefcf3cbfbbc3f"
	I0807 19:22:18.420314  655176 cri.go:89] found id: ""
	I0807 19:22:18.420403  655176 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0807 19:22:18.435646  655176 cri.go:116] JSON = null
	W0807 19:22:18.435750  655176 kubeadm.go:399] unpause failed: list paused: list returned 0 containers, but ps returned 8
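
The warning above comes from a consistency check: crictl ps found 8 kube-system containers, but runc's state directory listed none to unpause, so the unpause pass is skipped. Roughly the same check in shell (jq assumed available; jq treats the length of null as 0):

    PS=$(sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system | wc -l)
    PAUSED=$(sudo runc --root /run/containerd/runc/k8s.io list -f json | jq 'length')
    [ "$PAUSED" -eq 0 ] && [ "$PS" -gt 0 ] \
      && echo "list returned 0 containers, but ps returned $PS"
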
	I0807 19:22:18.435832  655176 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0807 19:22:18.452915  655176 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0807 19:22:18.452983  655176 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0807 19:22:18.453055  655176 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0807 19:22:18.464433  655176 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0807 19:22:18.464916  655176 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-145103" does not appear in /home/jenkins/minikube-integration/19389-443116/kubeconfig
	I0807 19:22:18.465073  655176 kubeconfig.go:62] /home/jenkins/minikube-integration/19389-443116/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-145103" cluster setting kubeconfig missing "old-k8s-version-145103" context setting]
	I0807 19:22:18.465410  655176 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19389-443116/kubeconfig: {Name:mk6f3c27977886608fc27ecd6788b53bded2f437 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 19:22:18.466990  655176 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0807 19:22:18.482620  655176 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.85.2
	I0807 19:22:18.482697  655176 kubeadm.go:597] duration metric: took 29.693177ms to restartPrimaryControlPlane
	I0807 19:22:18.482725  655176 kubeadm.go:394] duration metric: took 116.075266ms to StartCluster
	I0807 19:22:18.482756  655176 settings.go:142] acquiring lock: {Name:mkf40f234ddca073ac593f3a60c7a02738b6a34f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 19:22:18.482838  655176 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19389-443116/kubeconfig
	I0807 19:22:18.483506  655176 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19389-443116/kubeconfig: {Name:mk6f3c27977886608fc27ecd6788b53bded2f437 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 19:22:18.483757  655176 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0807 19:22:18.484178  655176 config.go:182] Loaded profile config "old-k8s-version-145103": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0807 19:22:18.484205  655176 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0807 19:22:18.484357  655176 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-145103"
	I0807 19:22:18.484380  655176 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-145103"
	W0807 19:22:18.484387  655176 addons.go:243] addon storage-provisioner should already be in state true
	I0807 19:22:18.484411  655176 host.go:66] Checking if "old-k8s-version-145103" exists ...
	I0807 19:22:18.484552  655176 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-145103"
	I0807 19:22:18.484584  655176 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-145103"
	I0807 19:22:18.484837  655176 cli_runner.go:164] Run: docker container inspect old-k8s-version-145103 --format={{.State.Status}}
	I0807 19:22:18.484926  655176 cli_runner.go:164] Run: docker container inspect old-k8s-version-145103 --format={{.State.Status}}
	I0807 19:22:18.485862  655176 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-145103"
	I0807 19:22:18.485925  655176 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-145103"
	W0807 19:22:18.485947  655176 addons.go:243] addon metrics-server should already be in state true
	I0807 19:22:18.486030  655176 host.go:66] Checking if "old-k8s-version-145103" exists ...
	I0807 19:22:18.486546  655176 cli_runner.go:164] Run: docker container inspect old-k8s-version-145103 --format={{.State.Status}}
	I0807 19:22:18.487229  655176 addons.go:69] Setting dashboard=true in profile "old-k8s-version-145103"
	I0807 19:22:18.487458  655176 addons.go:234] Setting addon dashboard=true in "old-k8s-version-145103"
	W0807 19:22:18.487479  655176 addons.go:243] addon dashboard should already be in state true
	I0807 19:22:18.487511  655176 host.go:66] Checking if "old-k8s-version-145103" exists ...
	I0807 19:22:18.487955  655176 cli_runner.go:164] Run: docker container inspect old-k8s-version-145103 --format={{.State.Status}}
	I0807 19:22:18.487428  655176 out.go:177] * Verifying Kubernetes components...
	I0807 19:22:18.496765  655176 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 19:22:18.536826  655176 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0807 19:22:18.544676  655176 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0807 19:22:18.544700  655176 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0807 19:22:18.544768  655176 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-145103
	I0807 19:22:18.567371  655176 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0807 19:22:18.569943  655176 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0807 19:22:18.571055  655176 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-145103"
	W0807 19:22:18.571068  655176 addons.go:243] addon default-storageclass should already be in state true
	I0807 19:22:18.571093  655176 host.go:66] Checking if "old-k8s-version-145103" exists ...
	I0807 19:22:18.571488  655176 cli_runner.go:164] Run: docker container inspect old-k8s-version-145103 --format={{.State.Status}}
	I0807 19:22:18.576054  655176 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0807 19:22:18.576085  655176 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0807 19:22:18.576161  655176 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-145103
	I0807 19:22:18.584595  655176 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0807 19:22:18.586477  655176 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0807 19:22:18.586501  655176 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0807 19:22:18.586580  655176 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-145103
	I0807 19:22:18.614341  655176 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/19389-443116/.minikube/machines/old-k8s-version-145103/id_rsa Username:docker}
	I0807 19:22:18.653832  655176 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/19389-443116/.minikube/machines/old-k8s-version-145103/id_rsa Username:docker}
	I0807 19:22:18.660960  655176 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0807 19:22:18.660980  655176 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0807 19:22:18.661052  655176 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-145103
	I0807 19:22:18.662430  655176 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/19389-443116/.minikube/machines/old-k8s-version-145103/id_rsa Username:docker}
	I0807 19:22:18.696472  655176 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/19389-443116/.minikube/machines/old-k8s-version-145103/id_rsa Username:docker}
	I0807 19:22:18.718755  655176 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0807 19:22:18.779803  655176 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-145103" to be "Ready" ...
	I0807 19:22:18.821707  655176 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0807 19:22:18.927828  655176 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0807 19:22:18.927925  655176 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0807 19:22:18.952319  655176 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0807 19:22:18.958638  655176 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0807 19:22:18.958662  655176 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0807 19:22:18.987415  655176 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0807 19:22:18.987443  655176 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0807 19:22:19.021754  655176 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0807 19:22:19.021779  655176 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	W0807 19:22:19.052767  655176 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0807 19:22:19.052805  655176 retry.go:31] will retry after 341.997761ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
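
Every addon apply in the stretch below fails the same way while the apiserver is still coming up, and retry.go reschedules each one with a growing, jittered delay. A simplified fixed-backoff version of that loop (the real delay values are the ones printed in the log):

    n=0
    until sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
        /var/lib/minikube/binaries/v1.20.0/kubectl apply --force \
        -f /etc/kubernetes/addons/storage-provisioner.yaml; do
      n=$((n+1)); [ "$n" -ge 10 ] && break   # give up after 10 attempts
      sleep "$n"                             # stand-in for the jittered backoff
    done
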
	I0807 19:22:19.082233  655176 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0807 19:22:19.082261  655176 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0807 19:22:19.090506  655176 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0807 19:22:19.090528  655176 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0807 19:22:19.124512  655176 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0807 19:22:19.161606  655176 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0807 19:22:19.161633  655176 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0807 19:22:19.219017  655176 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0807 19:22:19.219043  655176 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	W0807 19:22:19.271286  655176 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0807 19:22:19.271318  655176 retry.go:31] will retry after 163.583734ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0807 19:22:19.283337  655176 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0807 19:22:19.283369  655176 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0807 19:22:19.319838  655176 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0807 19:22:19.319873  655176 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	W0807 19:22:19.335365  655176 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0807 19:22:19.335397  655176 retry.go:31] will retry after 335.498494ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0807 19:22:19.354310  655176 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0807 19:22:19.354340  655176 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0807 19:22:19.375069  655176 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0807 19:22:19.375096  655176 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0807 19:22:19.395320  655176 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0807 19:22:19.398043  655176 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0807 19:22:19.435159  655176 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0807 19:22:19.624439  655176 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0807 19:22:19.624521  655176 retry.go:31] will retry after 147.843902ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0807 19:22:19.624592  655176 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0807 19:22:19.624616  655176 retry.go:31] will retry after 261.349959ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0807 19:22:19.669464  655176 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0807 19:22:19.669553  655176 retry.go:31] will retry after 478.68039ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0807 19:22:19.671793  655176 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0807 19:22:19.763112  655176 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0807 19:22:19.763155  655176 retry.go:31] will retry after 270.355646ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0807 19:22:19.773513  655176 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0807 19:22:19.878664  655176 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0807 19:22:19.878710  655176 retry.go:31] will retry after 331.615983ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0807 19:22:19.886813  655176 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0807 19:22:20.022295  655176 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0807 19:22:20.022344  655176 retry.go:31] will retry after 547.125511ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0807 19:22:20.034726  655176 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0807 19:22:20.148464  655176 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0807 19:22:20.153457  655176 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0807 19:22:20.153541  655176 retry.go:31] will retry after 836.678222ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0807 19:22:20.210710  655176 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0807 19:22:20.260922  655176 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0807 19:22:20.260956  655176 retry.go:31] will retry after 579.409776ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0807 19:22:20.342415  655176 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0807 19:22:20.342450  655176 retry.go:31] will retry after 799.35383ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0807 19:22:20.569735  655176 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0807 19:22:20.676316  655176 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0807 19:22:20.676421  655176 retry.go:31] will retry after 1.159192782s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0807 19:22:20.781111  655176 node_ready.go:53] error getting node "old-k8s-version-145103": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-145103": dial tcp 192.168.85.2:8443: connect: connection refused
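
The node_ready poll above tolerates connection-refused errors like this one while the apiserver restarts and keeps retrying within its 6m0s budget. The same wait expressed with kubectl (kubeconfig path from the log); note that unlike minikube's client-go loop, kubectl wait aborts on a refused connection, so it is only equivalent once the apiserver is answering:

    kubectl --kubeconfig /home/jenkins/minikube-integration/19389-443116/kubeconfig \
      wait --for=condition=Ready node/old-k8s-version-145103 --timeout=6m
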
	I0807 19:22:20.841333  655176 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0807 19:22:20.939824  655176 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0807 19:22:20.939910  655176 retry.go:31] will retry after 1.259157822s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0807 19:22:20.991058  655176 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0807 19:22:21.097513  655176 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0807 19:22:21.097619  655176 retry.go:31] will retry after 592.600014ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0807 19:22:21.142978  655176 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0807 19:22:21.250268  655176 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0807 19:22:21.250359  655176 retry.go:31] will retry after 648.778622ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0807 19:22:21.690503  655176 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0807 19:22:21.781431  655176 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0807 19:22:21.781505  655176 retry.go:31] will retry after 1.55789434s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0807 19:22:21.836771  655176 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0807 19:22:21.899371  655176 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0807 19:22:21.937838  655176 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0807 19:22:21.937918  655176 retry.go:31] will retry after 823.504792ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0807 19:22:22.027106  655176 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0807 19:22:22.027141  655176 retry.go:31] will retry after 930.504344ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0807 19:22:22.199523  655176 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0807 19:22:22.330261  655176 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0807 19:22:22.330296  655176 retry.go:31] will retry after 1.066715819s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0807 19:22:22.761906  655176 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0807 19:22:22.869859  655176 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0807 19:22:22.869894  655176 retry.go:31] will retry after 1.292639421s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0807 19:22:22.958260  655176 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0807 19:22:23.050666  655176 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0807 19:22:23.050702  655176 retry.go:31] will retry after 2.008688182s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0807 19:22:23.280470  655176 node_ready.go:53] error getting node "old-k8s-version-145103": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-145103": dial tcp 192.168.85.2:8443: connect: connection refused
	I0807 19:22:23.339829  655176 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0807 19:22:23.397316  655176 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0807 19:22:23.463152  655176 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0807 19:22:23.463193  655176 retry.go:31] will retry after 2.391958007s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0807 19:22:23.542126  655176 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0807 19:22:23.542155  655176 retry.go:31] will retry after 1.646014678s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0807 19:22:24.163474  655176 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0807 19:22:24.271812  655176 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0807 19:22:24.271850  655176 retry.go:31] will retry after 2.933589895s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0807 19:22:25.060118  655176 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0807 19:22:25.170200  655176 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0807 19:22:25.170243  655176 retry.go:31] will retry after 1.613336146s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0807 19:22:25.188616  655176 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0807 19:22:25.295579  655176 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0807 19:22:25.295610  655176 retry.go:31] will retry after 3.296911005s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0807 19:22:25.780370  655176 node_ready.go:53] error getting node "old-k8s-version-145103": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-145103": dial tcp 192.168.85.2:8443: connect: connection refused
	I0807 19:22:25.855826  655176 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0807 19:22:26.007371  655176 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0807 19:22:26.007414  655176 retry.go:31] will retry after 1.978541891s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0807 19:22:26.784410  655176 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0807 19:22:26.871416  655176 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0807 19:22:26.871446  655176 retry.go:31] will retry after 3.939256977s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0807 19:22:27.205642  655176 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0807 19:22:27.420384  655176 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0807 19:22:27.420475  655176 retry.go:31] will retry after 5.176120887s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0807 19:22:27.781278  655176 node_ready.go:53] error getting node "old-k8s-version-145103": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-145103": dial tcp 192.168.85.2:8443: connect: connection refused
	I0807 19:22:27.986681  655176 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0807 19:22:28.593670  655176 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0807 19:22:30.811343  655176 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0807 19:22:32.597416  655176 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0807 19:22:38.281839  655176 node_ready.go:53] error getting node "old-k8s-version-145103": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-145103": net/http: TLS handshake timeout
	I0807 19:22:38.916570  655176 node_ready.go:49] node "old-k8s-version-145103" has status "Ready":"True"
	I0807 19:22:38.916663  655176 node_ready.go:38] duration metric: took 20.136815619s for node "old-k8s-version-145103" to be "Ready" ...
	I0807 19:22:38.916688  655176 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
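[Editor's note] The node turned Ready about 20s after polling began; the earlier "connection refused" and "TLS handshake timeout" errors are expected while the control plane restarts. A minimal client-go sketch of this kind of readiness poll (the kubeconfig path and node name are taken from the log; everything else is illustrative, not minikube's actual code):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitNodeReady polls the API server until the node reports a Ready=True
    // condition, tolerating the transient errors logged above.
    func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
        tick := time.NewTicker(2 * time.Second)
        defer tick.Stop()
        for {
            node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
            if err == nil {
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                        return nil
                    }
                }
            } else {
                fmt.Printf("error getting node %q: %v\n", name, err)
            }
            select {
            case <-ctx.Done():
                return ctx.Err()
            case <-tick.C:
            }
        }
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
        defer cancel()
        fmt.Println(waitNodeReady(ctx, cs, "old-k8s-version-145103"))
    }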
	I0807 19:22:39.430392  655176 pod_ready.go:78] waiting up to 6m0s for pod "coredns-74ff55c5b-pnlz9" in "kube-system" namespace to be "Ready" ...
	I0807 19:22:39.606425  655176 pod_ready.go:92] pod "coredns-74ff55c5b-pnlz9" in "kube-system" namespace has status "Ready":"True"
	I0807 19:22:39.606502  655176 pod_ready.go:81] duration metric: took 176.033017ms for pod "coredns-74ff55c5b-pnlz9" in "kube-system" namespace to be "Ready" ...
	I0807 19:22:39.606530  655176 pod_ready.go:78] waiting up to 6m0s for pod "etcd-old-k8s-version-145103" in "kube-system" namespace to be "Ready" ...
	I0807 19:22:39.646081  655176 pod_ready.go:92] pod "etcd-old-k8s-version-145103" in "kube-system" namespace has status "Ready":"True"
	I0807 19:22:39.646144  655176 pod_ready.go:81] duration metric: took 39.591873ms for pod "etcd-old-k8s-version-145103" in "kube-system" namespace to be "Ready" ...
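[Editor's note] The per-pod waits that follow use the same polling pattern, keyed on the pod's Ready condition. A minimal sketch of that check, assuming only the k8s.io/api types:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    // podIsReady mirrors the pod_ready checks in this log: a pod counts as
    // "Ready" once its PodReady condition reports True.
    func podIsReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                return true
            }
        }
        return false
    }

    func main() {
        p := &corev1.Pod{Status: corev1.PodStatus{Conditions: []corev1.PodCondition{
            {Type: corev1.PodReady, Status: corev1.ConditionTrue},
        }}}
        fmt.Println(podIsReady(p)) // true
    }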
	I0807 19:22:39.646181  655176 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-145103" in "kube-system" namespace to be "Ready" ...
	I0807 19:22:41.667375  655176 pod_ready.go:102] pod "kube-apiserver-old-k8s-version-145103" in "kube-system" namespace has status "Ready":"False"
	I0807 19:22:41.702793  655176 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (13.716073705s)
	I0807 19:22:41.702885  655176 addons.go:475] Verifying addon metrics-server=true in "old-k8s-version-145103"
	I0807 19:22:41.702948  655176 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (13.109204167s)
	I0807 19:22:41.703465  655176 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.10592675s)
	I0807 19:22:41.703552  655176 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (10.892112444s)
	I0807 19:22:41.705298  655176 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-145103 addons enable metrics-server
	
	I0807 19:22:41.719680  655176 out.go:177] * Enabled addons: metrics-server, storage-provisioner, dashboard, default-storageclass
	I0807 19:22:41.721638  655176 addons.go:510] duration metric: took 23.23743323s for enable addons: enabled=[metrics-server storage-provisioner dashboard default-storageclass]
	I0807 19:22:44.153583  655176 pod_ready.go:102] pod "kube-apiserver-old-k8s-version-145103" in "kube-system" namespace has status "Ready":"False"
	I0807 19:22:46.155944  655176 pod_ready.go:102] pod "kube-apiserver-old-k8s-version-145103" in "kube-system" namespace has status "Ready":"False"
	I0807 19:22:48.729366  655176 pod_ready.go:102] pod "kube-apiserver-old-k8s-version-145103" in "kube-system" namespace has status "Ready":"False"
	I0807 19:22:51.153816  655176 pod_ready.go:102] pod "kube-apiserver-old-k8s-version-145103" in "kube-system" namespace has status "Ready":"False"
	I0807 19:22:51.652628  655176 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-145103" in "kube-system" namespace has status "Ready":"True"
	I0807 19:22:51.652655  655176 pod_ready.go:81] duration metric: took 12.0064534s for pod "kube-apiserver-old-k8s-version-145103" in "kube-system" namespace to be "Ready" ...
	I0807 19:22:51.652667  655176 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-145103" in "kube-system" namespace to be "Ready" ...
	I0807 19:22:53.660240  655176 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-145103" in "kube-system" namespace has status "Ready":"False"
	I0807 19:22:56.159679  655176 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-145103" in "kube-system" namespace has status "Ready":"False"
	I0807 19:22:58.164548  655176 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-145103" in "kube-system" namespace has status "Ready":"False"
	I0807 19:23:00.244748  655176 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-145103" in "kube-system" namespace has status "Ready":"False"
	I0807 19:23:02.659863  655176 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-145103" in "kube-system" namespace has status "Ready":"False"
	I0807 19:23:04.663126  655176 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-145103" in "kube-system" namespace has status "Ready":"False"
	I0807 19:23:07.165796  655176 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-145103" in "kube-system" namespace has status "Ready":"False"
	I0807 19:23:09.660215  655176 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-145103" in "kube-system" namespace has status "Ready":"False"
	I0807 19:23:11.660513  655176 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-145103" in "kube-system" namespace has status "Ready":"False"
	I0807 19:23:14.159845  655176 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-145103" in "kube-system" namespace has status "Ready":"False"
	I0807 19:23:16.160643  655176 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-145103" in "kube-system" namespace has status "Ready":"False"
	I0807 19:23:18.658811  655176 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-145103" in "kube-system" namespace has status "Ready":"False"
	I0807 19:23:20.659104  655176 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-145103" in "kube-system" namespace has status "Ready":"False"
	I0807 19:23:22.659187  655176 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-145103" in "kube-system" namespace has status "Ready":"False"
	I0807 19:23:24.659629  655176 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-145103" in "kube-system" namespace has status "Ready":"False"
	I0807 19:23:27.158974  655176 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-145103" in "kube-system" namespace has status "Ready":"False"
	I0807 19:23:29.161489  655176 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-145103" in "kube-system" namespace has status "Ready":"False"
	I0807 19:23:31.659369  655176 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-145103" in "kube-system" namespace has status "Ready":"False"
	I0807 19:23:33.660146  655176 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-145103" in "kube-system" namespace has status "Ready":"False"
	I0807 19:23:35.660943  655176 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-145103" in "kube-system" namespace has status "Ready":"False"
	I0807 19:23:38.159998  655176 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-145103" in "kube-system" namespace has status "Ready":"False"
	I0807 19:23:40.161706  655176 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-145103" in "kube-system" namespace has status "Ready":"False"
	I0807 19:23:42.163760  655176 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-145103" in "kube-system" namespace has status "Ready":"False"
	I0807 19:23:44.659896  655176 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-145103" in "kube-system" namespace has status "Ready":"False"
	I0807 19:23:46.659189  655176 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-145103" in "kube-system" namespace has status "Ready":"True"
	I0807 19:23:46.659219  655176 pod_ready.go:81] duration metric: took 55.006543667s for pod "kube-controller-manager-old-k8s-version-145103" in "kube-system" namespace to be "Ready" ...
	I0807 19:23:46.659233  655176 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-nk57r" in "kube-system" namespace to be "Ready" ...
	I0807 19:23:46.665070  655176 pod_ready.go:92] pod "kube-proxy-nk57r" in "kube-system" namespace has status "Ready":"True"
	I0807 19:23:46.665097  655176 pod_ready.go:81] duration metric: took 5.856494ms for pod "kube-proxy-nk57r" in "kube-system" namespace to be "Ready" ...
	I0807 19:23:46.665109  655176 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-145103" in "kube-system" namespace to be "Ready" ...
	I0807 19:23:48.671258  655176 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-145103" in "kube-system" namespace has status "Ready":"False"
	I0807 19:23:50.671830  655176 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-145103" in "kube-system" namespace has status "Ready":"False"
	I0807 19:23:53.172261  655176 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-145103" in "kube-system" namespace has status "Ready":"False"
	I0807 19:23:55.672550  655176 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-145103" in "kube-system" namespace has status "Ready":"False"
	I0807 19:23:58.171666  655176 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-145103" in "kube-system" namespace has status "Ready":"False"
	I0807 19:24:00.193685  655176 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-145103" in "kube-system" namespace has status "Ready":"False"
	I0807 19:24:01.671899  655176 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-145103" in "kube-system" namespace has status "Ready":"True"
	I0807 19:24:01.671932  655176 pod_ready.go:81] duration metric: took 15.006813982s for pod "kube-scheduler-old-k8s-version-145103" in "kube-system" namespace to be "Ready" ...
	I0807 19:24:01.671945  655176 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-9975d5f86-g5777" in "kube-system" namespace to be "Ready" ...
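[Editor's note] The long run of lines below is that poll ticking every couple of seconds against metrics-server-9975d5f86-g5777, which never reports Ready; the kubelet warnings gathered at the end of this section show why (its image pull can never succeed).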
	I0807 19:24:03.677636  655176 pod_ready.go:102] pod "metrics-server-9975d5f86-g5777" in "kube-system" namespace has status "Ready":"False"
	I0807 19:24:05.677795  655176 pod_ready.go:102] pod "metrics-server-9975d5f86-g5777" in "kube-system" namespace has status "Ready":"False"
	I0807 19:24:07.679422  655176 pod_ready.go:102] pod "metrics-server-9975d5f86-g5777" in "kube-system" namespace has status "Ready":"False"
	I0807 19:24:10.179930  655176 pod_ready.go:102] pod "metrics-server-9975d5f86-g5777" in "kube-system" namespace has status "Ready":"False"
	I0807 19:24:12.180110  655176 pod_ready.go:102] pod "metrics-server-9975d5f86-g5777" in "kube-system" namespace has status "Ready":"False"
	I0807 19:24:14.678124  655176 pod_ready.go:102] pod "metrics-server-9975d5f86-g5777" in "kube-system" namespace has status "Ready":"False"
	I0807 19:24:17.179632  655176 pod_ready.go:102] pod "metrics-server-9975d5f86-g5777" in "kube-system" namespace has status "Ready":"False"
	I0807 19:24:19.678402  655176 pod_ready.go:102] pod "metrics-server-9975d5f86-g5777" in "kube-system" namespace has status "Ready":"False"
	I0807 19:24:22.179926  655176 pod_ready.go:102] pod "metrics-server-9975d5f86-g5777" in "kube-system" namespace has status "Ready":"False"
	I0807 19:24:24.679051  655176 pod_ready.go:102] pod "metrics-server-9975d5f86-g5777" in "kube-system" namespace has status "Ready":"False"
	I0807 19:24:27.178393  655176 pod_ready.go:102] pod "metrics-server-9975d5f86-g5777" in "kube-system" namespace has status "Ready":"False"
	I0807 19:24:29.678216  655176 pod_ready.go:102] pod "metrics-server-9975d5f86-g5777" in "kube-system" namespace has status "Ready":"False"
	I0807 19:24:31.678679  655176 pod_ready.go:102] pod "metrics-server-9975d5f86-g5777" in "kube-system" namespace has status "Ready":"False"
	I0807 19:24:34.178617  655176 pod_ready.go:102] pod "metrics-server-9975d5f86-g5777" in "kube-system" namespace has status "Ready":"False"
	I0807 19:24:36.677944  655176 pod_ready.go:102] pod "metrics-server-9975d5f86-g5777" in "kube-system" namespace has status "Ready":"False"
	I0807 19:24:38.678378  655176 pod_ready.go:102] pod "metrics-server-9975d5f86-g5777" in "kube-system" namespace has status "Ready":"False"
	I0807 19:24:41.178656  655176 pod_ready.go:102] pod "metrics-server-9975d5f86-g5777" in "kube-system" namespace has status "Ready":"False"
	I0807 19:24:43.679345  655176 pod_ready.go:102] pod "metrics-server-9975d5f86-g5777" in "kube-system" namespace has status "Ready":"False"
	I0807 19:24:46.178112  655176 pod_ready.go:102] pod "metrics-server-9975d5f86-g5777" in "kube-system" namespace has status "Ready":"False"
	I0807 19:24:48.178467  655176 pod_ready.go:102] pod "metrics-server-9975d5f86-g5777" in "kube-system" namespace has status "Ready":"False"
	I0807 19:24:50.179356  655176 pod_ready.go:102] pod "metrics-server-9975d5f86-g5777" in "kube-system" namespace has status "Ready":"False"
	I0807 19:24:52.677981  655176 pod_ready.go:102] pod "metrics-server-9975d5f86-g5777" in "kube-system" namespace has status "Ready":"False"
	I0807 19:24:54.678949  655176 pod_ready.go:102] pod "metrics-server-9975d5f86-g5777" in "kube-system" namespace has status "Ready":"False"
	I0807 19:24:56.679659  655176 pod_ready.go:102] pod "metrics-server-9975d5f86-g5777" in "kube-system" namespace has status "Ready":"False"
	I0807 19:24:59.177886  655176 pod_ready.go:102] pod "metrics-server-9975d5f86-g5777" in "kube-system" namespace has status "Ready":"False"
	I0807 19:25:01.179045  655176 pod_ready.go:102] pod "metrics-server-9975d5f86-g5777" in "kube-system" namespace has status "Ready":"False"
	I0807 19:25:03.679130  655176 pod_ready.go:102] pod "metrics-server-9975d5f86-g5777" in "kube-system" namespace has status "Ready":"False"
	I0807 19:25:06.178177  655176 pod_ready.go:102] pod "metrics-server-9975d5f86-g5777" in "kube-system" namespace has status "Ready":"False"
	I0807 19:25:08.178397  655176 pod_ready.go:102] pod "metrics-server-9975d5f86-g5777" in "kube-system" namespace has status "Ready":"False"
	I0807 19:25:10.681705  655176 pod_ready.go:102] pod "metrics-server-9975d5f86-g5777" in "kube-system" namespace has status "Ready":"False"
	I0807 19:25:13.177892  655176 pod_ready.go:102] pod "metrics-server-9975d5f86-g5777" in "kube-system" namespace has status "Ready":"False"
	I0807 19:25:15.178083  655176 pod_ready.go:102] pod "metrics-server-9975d5f86-g5777" in "kube-system" namespace has status "Ready":"False"
	I0807 19:25:17.180591  655176 pod_ready.go:102] pod "metrics-server-9975d5f86-g5777" in "kube-system" namespace has status "Ready":"False"
	I0807 19:25:19.181679  655176 pod_ready.go:102] pod "metrics-server-9975d5f86-g5777" in "kube-system" namespace has status "Ready":"False"
	I0807 19:25:21.678180  655176 pod_ready.go:102] pod "metrics-server-9975d5f86-g5777" in "kube-system" namespace has status "Ready":"False"
	I0807 19:25:23.678382  655176 pod_ready.go:102] pod "metrics-server-9975d5f86-g5777" in "kube-system" namespace has status "Ready":"False"
	I0807 19:25:25.678906  655176 pod_ready.go:102] pod "metrics-server-9975d5f86-g5777" in "kube-system" namespace has status "Ready":"False"
	I0807 19:25:27.679245  655176 pod_ready.go:102] pod "metrics-server-9975d5f86-g5777" in "kube-system" namespace has status "Ready":"False"
	I0807 19:25:30.179743  655176 pod_ready.go:102] pod "metrics-server-9975d5f86-g5777" in "kube-system" namespace has status "Ready":"False"
	I0807 19:25:32.677921  655176 pod_ready.go:102] pod "metrics-server-9975d5f86-g5777" in "kube-system" namespace has status "Ready":"False"
	I0807 19:25:34.678522  655176 pod_ready.go:102] pod "metrics-server-9975d5f86-g5777" in "kube-system" namespace has status "Ready":"False"
	I0807 19:25:36.678589  655176 pod_ready.go:102] pod "metrics-server-9975d5f86-g5777" in "kube-system" namespace has status "Ready":"False"
	I0807 19:25:38.678886  655176 pod_ready.go:102] pod "metrics-server-9975d5f86-g5777" in "kube-system" namespace has status "Ready":"False"
	I0807 19:25:41.178585  655176 pod_ready.go:102] pod "metrics-server-9975d5f86-g5777" in "kube-system" namespace has status "Ready":"False"
	I0807 19:25:43.679752  655176 pod_ready.go:102] pod "metrics-server-9975d5f86-g5777" in "kube-system" namespace has status "Ready":"False"
	I0807 19:25:46.179547  655176 pod_ready.go:102] pod "metrics-server-9975d5f86-g5777" in "kube-system" namespace has status "Ready":"False"
	I0807 19:25:48.678008  655176 pod_ready.go:102] pod "metrics-server-9975d5f86-g5777" in "kube-system" namespace has status "Ready":"False"
	I0807 19:25:50.678978  655176 pod_ready.go:102] pod "metrics-server-9975d5f86-g5777" in "kube-system" namespace has status "Ready":"False"
	I0807 19:25:53.178769  655176 pod_ready.go:102] pod "metrics-server-9975d5f86-g5777" in "kube-system" namespace has status "Ready":"False"
	I0807 19:25:55.179713  655176 pod_ready.go:102] pod "metrics-server-9975d5f86-g5777" in "kube-system" namespace has status "Ready":"False"
	I0807 19:25:57.678179  655176 pod_ready.go:102] pod "metrics-server-9975d5f86-g5777" in "kube-system" namespace has status "Ready":"False"
	I0807 19:25:59.678223  655176 pod_ready.go:102] pod "metrics-server-9975d5f86-g5777" in "kube-system" namespace has status "Ready":"False"
	I0807 19:26:02.178266  655176 pod_ready.go:102] pod "metrics-server-9975d5f86-g5777" in "kube-system" namespace has status "Ready":"False"
	I0807 19:26:04.178709  655176 pod_ready.go:102] pod "metrics-server-9975d5f86-g5777" in "kube-system" namespace has status "Ready":"False"
	I0807 19:26:06.679340  655176 pod_ready.go:102] pod "metrics-server-9975d5f86-g5777" in "kube-system" namespace has status "Ready":"False"
	I0807 19:26:08.682687  655176 pod_ready.go:102] pod "metrics-server-9975d5f86-g5777" in "kube-system" namespace has status "Ready":"False"
	I0807 19:26:11.179372  655176 pod_ready.go:102] pod "metrics-server-9975d5f86-g5777" in "kube-system" namespace has status "Ready":"False"
	I0807 19:26:13.678848  655176 pod_ready.go:102] pod "metrics-server-9975d5f86-g5777" in "kube-system" namespace has status "Ready":"False"
	I0807 19:26:16.179021  655176 pod_ready.go:102] pod "metrics-server-9975d5f86-g5777" in "kube-system" namespace has status "Ready":"False"
	I0807 19:26:18.679133  655176 pod_ready.go:102] pod "metrics-server-9975d5f86-g5777" in "kube-system" namespace has status "Ready":"False"
	I0807 19:26:21.178470  655176 pod_ready.go:102] pod "metrics-server-9975d5f86-g5777" in "kube-system" namespace has status "Ready":"False"
	I0807 19:26:23.179844  655176 pod_ready.go:102] pod "metrics-server-9975d5f86-g5777" in "kube-system" namespace has status "Ready":"False"
	I0807 19:26:25.678510  655176 pod_ready.go:102] pod "metrics-server-9975d5f86-g5777" in "kube-system" namespace has status "Ready":"False"
	I0807 19:26:27.679060  655176 pod_ready.go:102] pod "metrics-server-9975d5f86-g5777" in "kube-system" namespace has status "Ready":"False"
	I0807 19:26:30.179183  655176 pod_ready.go:102] pod "metrics-server-9975d5f86-g5777" in "kube-system" namespace has status "Ready":"False"
	I0807 19:26:32.679038  655176 pod_ready.go:102] pod "metrics-server-9975d5f86-g5777" in "kube-system" namespace has status "Ready":"False"
	I0807 19:26:35.178887  655176 pod_ready.go:102] pod "metrics-server-9975d5f86-g5777" in "kube-system" namespace has status "Ready":"False"
	I0807 19:26:37.678412  655176 pod_ready.go:102] pod "metrics-server-9975d5f86-g5777" in "kube-system" namespace has status "Ready":"False"
	I0807 19:26:40.178226  655176 pod_ready.go:102] pod "metrics-server-9975d5f86-g5777" in "kube-system" namespace has status "Ready":"False"
	I0807 19:26:42.180893  655176 pod_ready.go:102] pod "metrics-server-9975d5f86-g5777" in "kube-system" namespace has status "Ready":"False"
	I0807 19:26:44.678011  655176 pod_ready.go:102] pod "metrics-server-9975d5f86-g5777" in "kube-system" namespace has status "Ready":"False"
	I0807 19:26:47.177477  655176 pod_ready.go:102] pod "metrics-server-9975d5f86-g5777" in "kube-system" namespace has status "Ready":"False"
	I0807 19:26:49.178129  655176 pod_ready.go:102] pod "metrics-server-9975d5f86-g5777" in "kube-system" namespace has status "Ready":"False"
	I0807 19:26:51.178646  655176 pod_ready.go:102] pod "metrics-server-9975d5f86-g5777" in "kube-system" namespace has status "Ready":"False"
	I0807 19:26:53.178734  655176 pod_ready.go:102] pod "metrics-server-9975d5f86-g5777" in "kube-system" namespace has status "Ready":"False"
	I0807 19:26:55.179302  655176 pod_ready.go:102] pod "metrics-server-9975d5f86-g5777" in "kube-system" namespace has status "Ready":"False"
	I0807 19:26:57.678910  655176 pod_ready.go:102] pod "metrics-server-9975d5f86-g5777" in "kube-system" namespace has status "Ready":"False"
	I0807 19:27:00.226157  655176 pod_ready.go:102] pod "metrics-server-9975d5f86-g5777" in "kube-system" namespace has status "Ready":"False"
	I0807 19:27:02.678944  655176 pod_ready.go:102] pod "metrics-server-9975d5f86-g5777" in "kube-system" namespace has status "Ready":"False"
	I0807 19:27:04.679074  655176 pod_ready.go:102] pod "metrics-server-9975d5f86-g5777" in "kube-system" namespace has status "Ready":"False"
	I0807 19:27:07.178876  655176 pod_ready.go:102] pod "metrics-server-9975d5f86-g5777" in "kube-system" namespace has status "Ready":"False"
	I0807 19:27:09.678535  655176 pod_ready.go:102] pod "metrics-server-9975d5f86-g5777" in "kube-system" namespace has status "Ready":"False"
	I0807 19:27:11.679154  655176 pod_ready.go:102] pod "metrics-server-9975d5f86-g5777" in "kube-system" namespace has status "Ready":"False"
	I0807 19:27:14.178931  655176 pod_ready.go:102] pod "metrics-server-9975d5f86-g5777" in "kube-system" namespace has status "Ready":"False"
	I0807 19:27:16.677838  655176 pod_ready.go:102] pod "metrics-server-9975d5f86-g5777" in "kube-system" namespace has status "Ready":"False"
	I0807 19:27:18.678003  655176 pod_ready.go:102] pod "metrics-server-9975d5f86-g5777" in "kube-system" namespace has status "Ready":"False"
	I0807 19:27:21.177920  655176 pod_ready.go:102] pod "metrics-server-9975d5f86-g5777" in "kube-system" namespace has status "Ready":"False"
	I0807 19:27:23.179203  655176 pod_ready.go:102] pod "metrics-server-9975d5f86-g5777" in "kube-system" namespace has status "Ready":"False"
	I0807 19:27:25.678572  655176 pod_ready.go:102] pod "metrics-server-9975d5f86-g5777" in "kube-system" namespace has status "Ready":"False"
	I0807 19:27:27.678682  655176 pod_ready.go:102] pod "metrics-server-9975d5f86-g5777" in "kube-system" namespace has status "Ready":"False"
	I0807 19:27:30.180111  655176 pod_ready.go:102] pod "metrics-server-9975d5f86-g5777" in "kube-system" namespace has status "Ready":"False"
	I0807 19:27:32.678066  655176 pod_ready.go:102] pod "metrics-server-9975d5f86-g5777" in "kube-system" namespace has status "Ready":"False"
	I0807 19:27:35.180750  655176 pod_ready.go:102] pod "metrics-server-9975d5f86-g5777" in "kube-system" namespace has status "Ready":"False"
	I0807 19:27:37.679882  655176 pod_ready.go:102] pod "metrics-server-9975d5f86-g5777" in "kube-system" namespace has status "Ready":"False"
	I0807 19:27:40.179025  655176 pod_ready.go:102] pod "metrics-server-9975d5f86-g5777" in "kube-system" namespace has status "Ready":"False"
	I0807 19:27:42.179636  655176 pod_ready.go:102] pod "metrics-server-9975d5f86-g5777" in "kube-system" namespace has status "Ready":"False"
	I0807 19:27:44.678326  655176 pod_ready.go:102] pod "metrics-server-9975d5f86-g5777" in "kube-system" namespace has status "Ready":"False"
	I0807 19:27:46.679948  655176 pod_ready.go:102] pod "metrics-server-9975d5f86-g5777" in "kube-system" namespace has status "Ready":"False"
	I0807 19:27:48.681623  655176 pod_ready.go:102] pod "metrics-server-9975d5f86-g5777" in "kube-system" namespace has status "Ready":"False"
	I0807 19:27:51.179470  655176 pod_ready.go:102] pod "metrics-server-9975d5f86-g5777" in "kube-system" namespace has status "Ready":"False"
	I0807 19:27:53.678211  655176 pod_ready.go:102] pod "metrics-server-9975d5f86-g5777" in "kube-system" namespace has status "Ready":"False"
	I0807 19:27:55.680315  655176 pod_ready.go:102] pod "metrics-server-9975d5f86-g5777" in "kube-system" namespace has status "Ready":"False"
	I0807 19:27:58.178866  655176 pod_ready.go:102] pod "metrics-server-9975d5f86-g5777" in "kube-system" namespace has status "Ready":"False"
	I0807 19:28:00.210436  655176 pod_ready.go:102] pod "metrics-server-9975d5f86-g5777" in "kube-system" namespace has status "Ready":"False"
	I0807 19:28:01.679179  655176 pod_ready.go:81] duration metric: took 4m0.007218815s for pod "metrics-server-9975d5f86-g5777" in "kube-system" namespace to be "Ready" ...
	E0807 19:28:01.679207  655176 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0807 19:28:01.679218  655176 pod_ready.go:38] duration metric: took 5m22.762488453s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
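[Editor's note] The wait gives up with context.DeadlineExceeded rather than a pod error: the pod is stuck pulling its image, not crashing, so only the deadline can end the loop. A minimal sketch of how such a timed-out wait surfaces, assuming a generic condition function:

    package main

    import (
        "context"
        "errors"
        "fmt"
        "time"
    )

    // waitForCondition polls check until it passes or the context expires,
    // returning context.DeadlineExceeded in the latter case -- the error the
    // WaitExtra line above reports.
    func waitForCondition(ctx context.Context, check func() bool) error {
        t := time.NewTicker(200 * time.Millisecond)
        defer t.Stop()
        for {
            if check() {
                return nil
            }
            select {
            case <-ctx.Done():
                return ctx.Err()
            case <-t.C:
            }
        }
    }

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), time.Second)
        defer cancel()
        err := waitForCondition(ctx, func() bool { return false }) // never ready
        fmt.Println(errors.Is(err, context.DeadlineExceeded))      // true
    }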
	I0807 19:28:01.679233  655176 api_server.go:52] waiting for apiserver process to appear ...
	I0807 19:28:01.679263  655176 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0807 19:28:01.679327  655176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0807 19:28:01.718688  655176 cri.go:89] found id: "3bc865fe065ed3ad03b284fb01361f63156a204b7ae0b28683c17e870ac4fc4e"
	I0807 19:28:01.718710  655176 cri.go:89] found id: "b297faa73cadffc51a549e10c40129c612c36462461c40cbdb9bc641d6ee9a07"
	I0807 19:28:01.718714  655176 cri.go:89] found id: ""
	I0807 19:28:01.718721  655176 logs.go:276] 2 containers: [3bc865fe065ed3ad03b284fb01361f63156a204b7ae0b28683c17e870ac4fc4e b297faa73cadffc51a549e10c40129c612c36462461c40cbdb9bc641d6ee9a07]
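[Editor's note] Each control-plane component is located by shelling out to crictl, as the Run lines show: crictl ps -a --quiet --name=<component> prints one container ID per line (two per component here, the current container and its pre-restart predecessor). A minimal Go sketch of the same call, assuming crictl is on PATH and sudo is non-interactive:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listContainerIDs runs the same command the cri.go lines above do and
    // splits the output into container IDs, one per line.
    func listContainerIDs(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(strings.TrimSpace(string(out))), nil
    }

    func main() {
        ids, err := listContainerIDs("kube-apiserver")
        if err != nil {
            fmt.Println("crictl failed:", err)
            return
        }
        fmt.Printf("%d containers: %v\n", len(ids), ids)
    }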
	I0807 19:28:01.718782  655176 ssh_runner.go:195] Run: which crictl
	I0807 19:28:01.722489  655176 ssh_runner.go:195] Run: which crictl
	I0807 19:28:01.725930  655176 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0807 19:28:01.726058  655176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0807 19:28:01.771862  655176 cri.go:89] found id: "3eedb6e8f6840ac29e27007777d3751b9d3c3115d81b0c7922ea53ca5bdf40b0"
	I0807 19:28:01.771881  655176 cri.go:89] found id: "9cc157f961887e16a46adadfe9ca360b16c7c88fa3fb260387fefcf3cbfbbc3f"
	I0807 19:28:01.771886  655176 cri.go:89] found id: ""
	I0807 19:28:01.771893  655176 logs.go:276] 2 containers: [3eedb6e8f6840ac29e27007777d3751b9d3c3115d81b0c7922ea53ca5bdf40b0 9cc157f961887e16a46adadfe9ca360b16c7c88fa3fb260387fefcf3cbfbbc3f]
	I0807 19:28:01.771952  655176 ssh_runner.go:195] Run: which crictl
	I0807 19:28:01.775850  655176 ssh_runner.go:195] Run: which crictl
	I0807 19:28:01.779158  655176 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0807 19:28:01.779250  655176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0807 19:28:01.817166  655176 cri.go:89] found id: "71b67ec384ee3491a9aedfb1907bf632c692e24d5bd42dc00d79abb448997faa"
	I0807 19:28:01.817191  655176 cri.go:89] found id: "6e6c60154928b7200ebdc9a3e446964a9486060a56e068fc228fc1cf486ea9f2"
	I0807 19:28:01.817196  655176 cri.go:89] found id: ""
	I0807 19:28:01.817203  655176 logs.go:276] 2 containers: [71b67ec384ee3491a9aedfb1907bf632c692e24d5bd42dc00d79abb448997faa 6e6c60154928b7200ebdc9a3e446964a9486060a56e068fc228fc1cf486ea9f2]
	I0807 19:28:01.817288  655176 ssh_runner.go:195] Run: which crictl
	I0807 19:28:01.821079  655176 ssh_runner.go:195] Run: which crictl
	I0807 19:28:01.824628  655176 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0807 19:28:01.824726  655176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0807 19:28:01.867496  655176 cri.go:89] found id: "6bf62d68898142ab7b24d33e793c1c5ee47a83b90edd2641cba394a846742a58"
	I0807 19:28:01.867561  655176 cri.go:89] found id: "9528fcb65a1d0f25a2c40a2d86715330fc6214aec182d07f3aaaf17856447d71"
	I0807 19:28:01.867580  655176 cri.go:89] found id: ""
	I0807 19:28:01.867603  655176 logs.go:276] 2 containers: [6bf62d68898142ab7b24d33e793c1c5ee47a83b90edd2641cba394a846742a58 9528fcb65a1d0f25a2c40a2d86715330fc6214aec182d07f3aaaf17856447d71]
	I0807 19:28:01.867685  655176 ssh_runner.go:195] Run: which crictl
	I0807 19:28:01.871242  655176 ssh_runner.go:195] Run: which crictl
	I0807 19:28:01.874773  655176 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0807 19:28:01.874873  655176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0807 19:28:01.924402  655176 cri.go:89] found id: "ef0324338dc1e165f57b48b6697cdfdd38e0a716c5db953f324a34d8b8b07a4a"
	I0807 19:28:01.924427  655176 cri.go:89] found id: "6fa7a0f941cdfdb925109b190e19418627e84b3541aca1a841d45a2170aab263"
	I0807 19:28:01.924432  655176 cri.go:89] found id: ""
	I0807 19:28:01.924439  655176 logs.go:276] 2 containers: [ef0324338dc1e165f57b48b6697cdfdd38e0a716c5db953f324a34d8b8b07a4a 6fa7a0f941cdfdb925109b190e19418627e84b3541aca1a841d45a2170aab263]
	I0807 19:28:01.924499  655176 ssh_runner.go:195] Run: which crictl
	I0807 19:28:01.928247  655176 ssh_runner.go:195] Run: which crictl
	I0807 19:28:01.932177  655176 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0807 19:28:01.932248  655176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0807 19:28:01.976974  655176 cri.go:89] found id: "feec356a13b99aa000a5eb5efe3d70a5b72bd5a70b7158a40648afc4cb27eadf"
	I0807 19:28:01.977041  655176 cri.go:89] found id: "3e7229b91d01277de26c3bdbed648db1659a47ae7a01f17a25604535059e3b69"
	I0807 19:28:01.977053  655176 cri.go:89] found id: ""
	I0807 19:28:01.977062  655176 logs.go:276] 2 containers: [feec356a13b99aa000a5eb5efe3d70a5b72bd5a70b7158a40648afc4cb27eadf 3e7229b91d01277de26c3bdbed648db1659a47ae7a01f17a25604535059e3b69]
	I0807 19:28:01.977133  655176 ssh_runner.go:195] Run: which crictl
	I0807 19:28:01.981018  655176 ssh_runner.go:195] Run: which crictl
	I0807 19:28:01.984907  655176 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0807 19:28:01.985014  655176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0807 19:28:02.026098  655176 cri.go:89] found id: "c546073d53684c34a050dfc7fc09a0893a42322936b7996cf13dc99f189689dc"
	I0807 19:28:02.026120  655176 cri.go:89] found id: "1d7137ab81f756c634b0c7c58cd62f9d324463ad793954a60c9479bdca07c1d9"
	I0807 19:28:02.026126  655176 cri.go:89] found id: ""
	I0807 19:28:02.026133  655176 logs.go:276] 2 containers: [c546073d53684c34a050dfc7fc09a0893a42322936b7996cf13dc99f189689dc 1d7137ab81f756c634b0c7c58cd62f9d324463ad793954a60c9479bdca07c1d9]
	I0807 19:28:02.026189  655176 ssh_runner.go:195] Run: which crictl
	I0807 19:28:02.029939  655176 ssh_runner.go:195] Run: which crictl
	I0807 19:28:02.034556  655176 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0807 19:28:02.034673  655176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0807 19:28:02.077207  655176 cri.go:89] found id: "1f71dfd46d47d861349904b991e501ae81c87e5f99aa08ea12edaf13977fd3ef"
	I0807 19:28:02.077247  655176 cri.go:89] found id: "c39d2dd3e3af4ce2f603cdcb5ffba311c3e583e21a32a79940c52420d20e73c2"
	I0807 19:28:02.077252  655176 cri.go:89] found id: ""
	I0807 19:28:02.077260  655176 logs.go:276] 2 containers: [1f71dfd46d47d861349904b991e501ae81c87e5f99aa08ea12edaf13977fd3ef c39d2dd3e3af4ce2f603cdcb5ffba311c3e583e21a32a79940c52420d20e73c2]
	I0807 19:28:02.077346  655176 ssh_runner.go:195] Run: which crictl
	I0807 19:28:02.081186  655176 ssh_runner.go:195] Run: which crictl
	I0807 19:28:02.084671  655176 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0807 19:28:02.084781  655176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0807 19:28:02.122760  655176 cri.go:89] found id: "c969c3eda055d4e537185b68a93ca50ae4c4f1bd8623727c4836ae5049aaa92f"
	I0807 19:28:02.122825  655176 cri.go:89] found id: ""
	I0807 19:28:02.122847  655176 logs.go:276] 1 containers: [c969c3eda055d4e537185b68a93ca50ae4c4f1bd8623727c4836ae5049aaa92f]
	I0807 19:28:02.122928  655176 ssh_runner.go:195] Run: which crictl
	I0807 19:28:02.126495  655176 logs.go:123] Gathering logs for storage-provisioner [1f71dfd46d47d861349904b991e501ae81c87e5f99aa08ea12edaf13977fd3ef] ...
	I0807 19:28:02.126522  655176 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1f71dfd46d47d861349904b991e501ae81c87e5f99aa08ea12edaf13977fd3ef"
	I0807 19:28:02.174909  655176 logs.go:123] Gathering logs for containerd ...
	I0807 19:28:02.174943  655176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0807 19:28:02.235250  655176 logs.go:123] Gathering logs for kube-scheduler [6bf62d68898142ab7b24d33e793c1c5ee47a83b90edd2641cba394a846742a58] ...
	I0807 19:28:02.235333  655176 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6bf62d68898142ab7b24d33e793c1c5ee47a83b90edd2641cba394a846742a58"
	I0807 19:28:02.280021  655176 logs.go:123] Gathering logs for kube-scheduler [9528fcb65a1d0f25a2c40a2d86715330fc6214aec182d07f3aaaf17856447d71] ...
	I0807 19:28:02.280092  655176 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9528fcb65a1d0f25a2c40a2d86715330fc6214aec182d07f3aaaf17856447d71"
	I0807 19:28:02.324188  655176 logs.go:123] Gathering logs for kube-proxy [ef0324338dc1e165f57b48b6697cdfdd38e0a716c5db953f324a34d8b8b07a4a] ...
	I0807 19:28:02.324269  655176 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ef0324338dc1e165f57b48b6697cdfdd38e0a716c5db953f324a34d8b8b07a4a"
	I0807 19:28:02.383651  655176 logs.go:123] Gathering logs for coredns [71b67ec384ee3491a9aedfb1907bf632c692e24d5bd42dc00d79abb448997faa] ...
	I0807 19:28:02.383682  655176 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 71b67ec384ee3491a9aedfb1907bf632c692e24d5bd42dc00d79abb448997faa"
	I0807 19:28:02.421351  655176 logs.go:123] Gathering logs for coredns [6e6c60154928b7200ebdc9a3e446964a9486060a56e068fc228fc1cf486ea9f2] ...
	I0807 19:28:02.421381  655176 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e6c60154928b7200ebdc9a3e446964a9486060a56e068fc228fc1cf486ea9f2"
	I0807 19:28:02.460050  655176 logs.go:123] Gathering logs for storage-provisioner [c39d2dd3e3af4ce2f603cdcb5ffba311c3e583e21a32a79940c52420d20e73c2] ...
	I0807 19:28:02.460130  655176 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c39d2dd3e3af4ce2f603cdcb5ffba311c3e583e21a32a79940c52420d20e73c2"
	I0807 19:28:02.497318  655176 logs.go:123] Gathering logs for kindnet [c546073d53684c34a050dfc7fc09a0893a42322936b7996cf13dc99f189689dc] ...
	I0807 19:28:02.497348  655176 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c546073d53684c34a050dfc7fc09a0893a42322936b7996cf13dc99f189689dc"
	I0807 19:28:02.566134  655176 logs.go:123] Gathering logs for kubernetes-dashboard [c969c3eda055d4e537185b68a93ca50ae4c4f1bd8623727c4836ae5049aaa92f] ...
	I0807 19:28:02.566168  655176 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c969c3eda055d4e537185b68a93ca50ae4c4f1bd8623727c4836ae5049aaa92f"
	I0807 19:28:02.614272  655176 logs.go:123] Gathering logs for kubelet ...
	I0807 19:28:02.614307  655176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0807 19:28:02.672340  655176 logs.go:138] Found kubelet problem: Aug 07 19:22:38 old-k8s-version-145103 kubelet[667]: E0807 19:22:38.928728     667 reflector.go:138] object-"kube-system"/"storage-provisioner-token-zfj7r": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-zfj7r" is forbidden: User "system:node:old-k8s-version-145103" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-145103' and this object
	W0807 19:28:02.672645  655176 logs.go:138] Found kubelet problem: Aug 07 19:22:38 old-k8s-version-145103 kubelet[667]: E0807 19:22:38.928825     667 reflector.go:138] object-"kube-system"/"kube-proxy-token-kjh9f": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-kjh9f" is forbidden: User "system:node:old-k8s-version-145103" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-145103' and this object
	W0807 19:28:02.672866  655176 logs.go:138] Found kubelet problem: Aug 07 19:22:38 old-k8s-version-145103 kubelet[667]: E0807 19:22:38.928873     667 reflector.go:138] object-"kube-system"/"kindnet-token-nxw2r": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-nxw2r" is forbidden: User "system:node:old-k8s-version-145103" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-145103' and this object
	W0807 19:28:02.673073  655176 logs.go:138] Found kubelet problem: Aug 07 19:22:38 old-k8s-version-145103 kubelet[667]: E0807 19:22:38.928927     667 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-145103" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-145103' and this object
	W0807 19:28:02.673291  655176 logs.go:138] Found kubelet problem: Aug 07 19:22:38 old-k8s-version-145103 kubelet[667]: E0807 19:22:38.940024     667 reflector.go:138] object-"default"/"default-token-zdfst": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-zdfst" is forbidden: User "system:node:old-k8s-version-145103" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-145103' and this object
	W0807 19:28:02.673516  655176 logs.go:138] Found kubelet problem: Aug 07 19:22:38 old-k8s-version-145103 kubelet[667]: E0807 19:22:38.940096     667 reflector.go:138] object-"kube-system"/"metrics-server-token-zkf4x": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-zkf4x" is forbidden: User "system:node:old-k8s-version-145103" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-145103' and this object
	W0807 19:28:02.673731  655176 logs.go:138] Found kubelet problem: Aug 07 19:22:38 old-k8s-version-145103 kubelet[667]: E0807 19:22:38.940143     667 reflector.go:138] object-"kube-system"/"coredns-token-wzbvq": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-wzbvq" is forbidden: User "system:node:old-k8s-version-145103" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-145103' and this object
	W0807 19:28:02.673931  655176 logs.go:138] Found kubelet problem: Aug 07 19:22:38 old-k8s-version-145103 kubelet[667]: E0807 19:22:38.940191     667 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-145103" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-145103' and this object
	W0807 19:28:02.682361  655176 logs.go:138] Found kubelet problem: Aug 07 19:22:42 old-k8s-version-145103 kubelet[667]: E0807 19:22:42.347184     667 pod_workers.go:191] Error syncing pod 23d2813a-d0e0-4efb-88b5-a90255e3e770 ("metrics-server-9975d5f86-g5777_kube-system(23d2813a-d0e0-4efb-88b5-a90255e3e770)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0807 19:28:02.682553  655176 logs.go:138] Found kubelet problem: Aug 07 19:22:42 old-k8s-version-145103 kubelet[667]: E0807 19:22:42.918907     667 pod_workers.go:191] Error syncing pod 23d2813a-d0e0-4efb-88b5-a90255e3e770 ("metrics-server-9975d5f86-g5777_kube-system(23d2813a-d0e0-4efb-88b5-a90255e3e770)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0807 19:28:02.685668  655176 logs.go:138] Found kubelet problem: Aug 07 19:22:57 old-k8s-version-145103 kubelet[667]: E0807 19:22:57.517610     667 pod_workers.go:191] Error syncing pod 23d2813a-d0e0-4efb-88b5-a90255e3e770 ("metrics-server-9975d5f86-g5777_kube-system(23d2813a-d0e0-4efb-88b5-a90255e3e770)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0807 19:28:02.686091  655176 logs.go:138] Found kubelet problem: Aug 07 19:23:00 old-k8s-version-145103 kubelet[667]: E0807 19:23:00.001804     667 reflector.go:138] object-"kubernetes-dashboard"/"kubernetes-dashboard-token-qhtqd": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kubernetes-dashboard-token-qhtqd" is forbidden: User "system:node:old-k8s-version-145103" cannot list resource "secrets" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'old-k8s-version-145103' and this object
	W0807 19:28:02.689027  655176 logs.go:138] Found kubelet problem: Aug 07 19:23:08 old-k8s-version-145103 kubelet[667]: E0807 19:23:08.509924     667 pod_workers.go:191] Error syncing pod 23d2813a-d0e0-4efb-88b5-a90255e3e770 ("metrics-server-9975d5f86-g5777_kube-system(23d2813a-d0e0-4efb-88b5-a90255e3e770)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0807 19:28:02.689488  655176 logs.go:138] Found kubelet problem: Aug 07 19:23:09 old-k8s-version-145103 kubelet[667]: E0807 19:23:09.027546     667 pod_workers.go:191] Error syncing pod 33c5829f-ed2d-48df-8d3b-9d4927dc0083 ("dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"
	W0807 19:28:02.689817  655176 logs.go:138] Found kubelet problem: Aug 07 19:23:10 old-k8s-version-145103 kubelet[667]: E0807 19:23:10.033699     667 pod_workers.go:191] Error syncing pod 33c5829f-ed2d-48df-8d3b-9d4927dc0083 ("dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"
	W0807 19:28:02.690257  655176 logs.go:138] Found kubelet problem: Aug 07 19:23:13 old-k8s-version-145103 kubelet[667]: E0807 19:23:13.047356     667 pod_workers.go:191] Error syncing pod e63be88a-9706-4c13-ab97-8b04c5a9e516 ("storage-provisioner_kube-system(e63be88a-9706-4c13-ab97-8b04c5a9e516)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(e63be88a-9706-4c13-ab97-8b04c5a9e516)"
	W0807 19:28:02.691179  655176 logs.go:138] Found kubelet problem: Aug 07 19:23:19 old-k8s-version-145103 kubelet[667]: E0807 19:23:19.085460     667 pod_workers.go:191] Error syncing pod 33c5829f-ed2d-48df-8d3b-9d4927dc0083 ("dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"
	W0807 19:28:02.693673  655176 logs.go:138] Found kubelet problem: Aug 07 19:23:20 old-k8s-version-145103 kubelet[667]: E0807 19:23:20.522246     667 pod_workers.go:191] Error syncing pod 23d2813a-d0e0-4efb-88b5-a90255e3e770 ("metrics-server-9975d5f86-g5777_kube-system(23d2813a-d0e0-4efb-88b5-a90255e3e770)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0807 19:28:02.694137  655176 logs.go:138] Found kubelet problem: Aug 07 19:23:28 old-k8s-version-145103 kubelet[667]: E0807 19:23:28.742763     667 pod_workers.go:191] Error syncing pod 33c5829f-ed2d-48df-8d3b-9d4927dc0083 ("dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"
	W0807 19:28:02.694328  655176 logs.go:138] Found kubelet problem: Aug 07 19:23:31 old-k8s-version-145103 kubelet[667]: E0807 19:23:31.505245     667 pod_workers.go:191] Error syncing pod 23d2813a-d0e0-4efb-88b5-a90255e3e770 ("metrics-server-9975d5f86-g5777_kube-system(23d2813a-d0e0-4efb-88b5-a90255e3e770)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0807 19:28:02.694948  655176 logs.go:138] Found kubelet problem: Aug 07 19:23:43 old-k8s-version-145103 kubelet[667]: E0807 19:23:43.151855     667 pod_workers.go:191] Error syncing pod 33c5829f-ed2d-48df-8d3b-9d4927dc0083 ("dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"
	W0807 19:28:02.695139  655176 logs.go:138] Found kubelet problem: Aug 07 19:23:44 old-k8s-version-145103 kubelet[667]: E0807 19:23:44.505345     667 pod_workers.go:191] Error syncing pod 23d2813a-d0e0-4efb-88b5-a90255e3e770 ("metrics-server-9975d5f86-g5777_kube-system(23d2813a-d0e0-4efb-88b5-a90255e3e770)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0807 19:28:02.695471  655176 logs.go:138] Found kubelet problem: Aug 07 19:23:48 old-k8s-version-145103 kubelet[667]: E0807 19:23:48.742421     667 pod_workers.go:191] Error syncing pod 33c5829f-ed2d-48df-8d3b-9d4927dc0083 ("dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"
	W0807 19:28:02.695657  655176 logs.go:138] Found kubelet problem: Aug 07 19:23:57 old-k8s-version-145103 kubelet[667]: E0807 19:23:57.505213     667 pod_workers.go:191] Error syncing pod 23d2813a-d0e0-4efb-88b5-a90255e3e770 ("metrics-server-9975d5f86-g5777_kube-system(23d2813a-d0e0-4efb-88b5-a90255e3e770)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0807 19:28:02.696006  655176 logs.go:138] Found kubelet problem: Aug 07 19:24:04 old-k8s-version-145103 kubelet[667]: E0807 19:24:04.505088     667 pod_workers.go:191] Error syncing pod 33c5829f-ed2d-48df-8d3b-9d4927dc0083 ("dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"
	W0807 19:28:02.698490  655176 logs.go:138] Found kubelet problem: Aug 07 19:24:12 old-k8s-version-145103 kubelet[667]: E0807 19:24:12.520104     667 pod_workers.go:191] Error syncing pod 23d2813a-d0e0-4efb-88b5-a90255e3e770 ("metrics-server-9975d5f86-g5777_kube-system(23d2813a-d0e0-4efb-88b5-a90255e3e770)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0807 19:28:02.698826  655176 logs.go:138] Found kubelet problem: Aug 07 19:24:19 old-k8s-version-145103 kubelet[667]: E0807 19:24:19.504866     667 pod_workers.go:191] Error syncing pod 33c5829f-ed2d-48df-8d3b-9d4927dc0083 ("dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"
	W0807 19:28:02.699012  655176 logs.go:138] Found kubelet problem: Aug 07 19:24:23 old-k8s-version-145103 kubelet[667]: E0807 19:24:23.505072     667 pod_workers.go:191] Error syncing pod 23d2813a-d0e0-4efb-88b5-a90255e3e770 ("metrics-server-9975d5f86-g5777_kube-system(23d2813a-d0e0-4efb-88b5-a90255e3e770)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0807 19:28:02.699620  655176 logs.go:138] Found kubelet problem: Aug 07 19:24:33 old-k8s-version-145103 kubelet[667]: E0807 19:24:33.304075     667 pod_workers.go:191] Error syncing pod 33c5829f-ed2d-48df-8d3b-9d4927dc0083 ("dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"
	W0807 19:28:02.699807  655176 logs.go:138] Found kubelet problem: Aug 07 19:24:35 old-k8s-version-145103 kubelet[667]: E0807 19:24:35.505146     667 pod_workers.go:191] Error syncing pod 23d2813a-d0e0-4efb-88b5-a90255e3e770 ("metrics-server-9975d5f86-g5777_kube-system(23d2813a-d0e0-4efb-88b5-a90255e3e770)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0807 19:28:02.700134  655176 logs.go:138] Found kubelet problem: Aug 07 19:24:38 old-k8s-version-145103 kubelet[667]: E0807 19:24:38.742889     667 pod_workers.go:191] Error syncing pod 33c5829f-ed2d-48df-8d3b-9d4927dc0083 ("dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"
	W0807 19:28:02.700324  655176 logs.go:138] Found kubelet problem: Aug 07 19:24:47 old-k8s-version-145103 kubelet[667]: E0807 19:24:47.513250     667 pod_workers.go:191] Error syncing pod 23d2813a-d0e0-4efb-88b5-a90255e3e770 ("metrics-server-9975d5f86-g5777_kube-system(23d2813a-d0e0-4efb-88b5-a90255e3e770)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0807 19:28:02.700659  655176 logs.go:138] Found kubelet problem: Aug 07 19:24:51 old-k8s-version-145103 kubelet[667]: E0807 19:24:51.504838     667 pod_workers.go:191] Error syncing pod 33c5829f-ed2d-48df-8d3b-9d4927dc0083 ("dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"
	W0807 19:28:02.700849  655176 logs.go:138] Found kubelet problem: Aug 07 19:25:01 old-k8s-version-145103 kubelet[667]: E0807 19:25:01.505233     667 pod_workers.go:191] Error syncing pod 23d2813a-d0e0-4efb-88b5-a90255e3e770 ("metrics-server-9975d5f86-g5777_kube-system(23d2813a-d0e0-4efb-88b5-a90255e3e770)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0807 19:28:02.701180  655176 logs.go:138] Found kubelet problem: Aug 07 19:25:04 old-k8s-version-145103 kubelet[667]: E0807 19:25:04.504768     667 pod_workers.go:191] Error syncing pod 33c5829f-ed2d-48df-8d3b-9d4927dc0083 ("dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"
	W0807 19:28:02.701364  655176 logs.go:138] Found kubelet problem: Aug 07 19:25:12 old-k8s-version-145103 kubelet[667]: E0807 19:25:12.505672     667 pod_workers.go:191] Error syncing pod 23d2813a-d0e0-4efb-88b5-a90255e3e770 ("metrics-server-9975d5f86-g5777_kube-system(23d2813a-d0e0-4efb-88b5-a90255e3e770)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0807 19:28:02.701723  655176 logs.go:138] Found kubelet problem: Aug 07 19:25:15 old-k8s-version-145103 kubelet[667]: E0807 19:25:15.504848     667 pod_workers.go:191] Error syncing pod 33c5829f-ed2d-48df-8d3b-9d4927dc0083 ("dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"
	W0807 19:28:02.701910  655176 logs.go:138] Found kubelet problem: Aug 07 19:25:24 old-k8s-version-145103 kubelet[667]: E0807 19:25:24.508383     667 pod_workers.go:191] Error syncing pod 23d2813a-d0e0-4efb-88b5-a90255e3e770 ("metrics-server-9975d5f86-g5777_kube-system(23d2813a-d0e0-4efb-88b5-a90255e3e770)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0807 19:28:02.702739  655176 logs.go:138] Found kubelet problem: Aug 07 19:25:26 old-k8s-version-145103 kubelet[667]: E0807 19:25:26.506014     667 pod_workers.go:191] Error syncing pod 33c5829f-ed2d-48df-8d3b-9d4927dc0083 ("dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"
	W0807 19:28:02.705256  655176 logs.go:138] Found kubelet problem: Aug 07 19:25:39 old-k8s-version-145103 kubelet[667]: E0807 19:25:39.513837     667 pod_workers.go:191] Error syncing pod 23d2813a-d0e0-4efb-88b5-a90255e3e770 ("metrics-server-9975d5f86-g5777_kube-system(23d2813a-d0e0-4efb-88b5-a90255e3e770)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0807 19:28:02.705588  655176 logs.go:138] Found kubelet problem: Aug 07 19:25:41 old-k8s-version-145103 kubelet[667]: E0807 19:25:41.504881     667 pod_workers.go:191] Error syncing pod 33c5829f-ed2d-48df-8d3b-9d4927dc0083 ("dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"
	W0807 19:28:02.705918  655176 logs.go:138] Found kubelet problem: Aug 07 19:25:52 old-k8s-version-145103 kubelet[667]: E0807 19:25:52.508931     667 pod_workers.go:191] Error syncing pod 33c5829f-ed2d-48df-8d3b-9d4927dc0083 ("dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"
	W0807 19:28:02.706103  655176 logs.go:138] Found kubelet problem: Aug 07 19:25:53 old-k8s-version-145103 kubelet[667]: E0807 19:25:53.506119     667 pod_workers.go:191] Error syncing pod 23d2813a-d0e0-4efb-88b5-a90255e3e770 ("metrics-server-9975d5f86-g5777_kube-system(23d2813a-d0e0-4efb-88b5-a90255e3e770)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0807 19:28:02.706690  655176 logs.go:138] Found kubelet problem: Aug 07 19:26:04 old-k8s-version-145103 kubelet[667]: E0807 19:26:04.550949     667 pod_workers.go:191] Error syncing pod 33c5829f-ed2d-48df-8d3b-9d4927dc0083 ("dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"
	W0807 19:28:02.706877  655176 logs.go:138] Found kubelet problem: Aug 07 19:26:06 old-k8s-version-145103 kubelet[667]: E0807 19:26:06.505835     667 pod_workers.go:191] Error syncing pod 23d2813a-d0e0-4efb-88b5-a90255e3e770 ("metrics-server-9975d5f86-g5777_kube-system(23d2813a-d0e0-4efb-88b5-a90255e3e770)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0807 19:28:02.707205  655176 logs.go:138] Found kubelet problem: Aug 07 19:26:08 old-k8s-version-145103 kubelet[667]: E0807 19:26:08.743279     667 pod_workers.go:191] Error syncing pod 33c5829f-ed2d-48df-8d3b-9d4927dc0083 ("dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"
	W0807 19:28:02.707393  655176 logs.go:138] Found kubelet problem: Aug 07 19:26:17 old-k8s-version-145103 kubelet[667]: E0807 19:26:17.505160     667 pod_workers.go:191] Error syncing pod 23d2813a-d0e0-4efb-88b5-a90255e3e770 ("metrics-server-9975d5f86-g5777_kube-system(23d2813a-d0e0-4efb-88b5-a90255e3e770)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0807 19:28:02.707719  655176 logs.go:138] Found kubelet problem: Aug 07 19:26:22 old-k8s-version-145103 kubelet[667]: E0807 19:26:22.505291     667 pod_workers.go:191] Error syncing pod 33c5829f-ed2d-48df-8d3b-9d4927dc0083 ("dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"
	W0807 19:28:02.707904  655176 logs.go:138] Found kubelet problem: Aug 07 19:26:28 old-k8s-version-145103 kubelet[667]: E0807 19:26:28.505099     667 pod_workers.go:191] Error syncing pod 23d2813a-d0e0-4efb-88b5-a90255e3e770 ("metrics-server-9975d5f86-g5777_kube-system(23d2813a-d0e0-4efb-88b5-a90255e3e770)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0807 19:28:02.708228  655176 logs.go:138] Found kubelet problem: Aug 07 19:26:36 old-k8s-version-145103 kubelet[667]: E0807 19:26:36.506064     667 pod_workers.go:191] Error syncing pod 33c5829f-ed2d-48df-8d3b-9d4927dc0083 ("dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"
	W0807 19:28:02.708444  655176 logs.go:138] Found kubelet problem: Aug 07 19:26:40 old-k8s-version-145103 kubelet[667]: E0807 19:26:40.509650     667 pod_workers.go:191] Error syncing pod 23d2813a-d0e0-4efb-88b5-a90255e3e770 ("metrics-server-9975d5f86-g5777_kube-system(23d2813a-d0e0-4efb-88b5-a90255e3e770)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0807 19:28:02.708779  655176 logs.go:138] Found kubelet problem: Aug 07 19:26:48 old-k8s-version-145103 kubelet[667]: E0807 19:26:48.505788     667 pod_workers.go:191] Error syncing pod 33c5829f-ed2d-48df-8d3b-9d4927dc0083 ("dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"
	W0807 19:28:02.708972  655176 logs.go:138] Found kubelet problem: Aug 07 19:26:55 old-k8s-version-145103 kubelet[667]: E0807 19:26:55.505139     667 pod_workers.go:191] Error syncing pod 23d2813a-d0e0-4efb-88b5-a90255e3e770 ("metrics-server-9975d5f86-g5777_kube-system(23d2813a-d0e0-4efb-88b5-a90255e3e770)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0807 19:28:02.709297  655176 logs.go:138] Found kubelet problem: Aug 07 19:27:00 old-k8s-version-145103 kubelet[667]: E0807 19:27:00.509785     667 pod_workers.go:191] Error syncing pod 33c5829f-ed2d-48df-8d3b-9d4927dc0083 ("dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"
	W0807 19:28:02.709486  655176 logs.go:138] Found kubelet problem: Aug 07 19:27:10 old-k8s-version-145103 kubelet[667]: E0807 19:27:10.505231     667 pod_workers.go:191] Error syncing pod 23d2813a-d0e0-4efb-88b5-a90255e3e770 ("metrics-server-9975d5f86-g5777_kube-system(23d2813a-d0e0-4efb-88b5-a90255e3e770)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0807 19:28:02.709815  655176 logs.go:138] Found kubelet problem: Aug 07 19:27:15 old-k8s-version-145103 kubelet[667]: E0807 19:27:15.504826     667 pod_workers.go:191] Error syncing pod 33c5829f-ed2d-48df-8d3b-9d4927dc0083 ("dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"
	W0807 19:28:02.709999  655176 logs.go:138] Found kubelet problem: Aug 07 19:27:24 old-k8s-version-145103 kubelet[667]: E0807 19:27:24.505344     667 pod_workers.go:191] Error syncing pod 23d2813a-d0e0-4efb-88b5-a90255e3e770 ("metrics-server-9975d5f86-g5777_kube-system(23d2813a-d0e0-4efb-88b5-a90255e3e770)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0807 19:28:02.710325  655176 logs.go:138] Found kubelet problem: Aug 07 19:27:26 old-k8s-version-145103 kubelet[667]: E0807 19:27:26.511078     667 pod_workers.go:191] Error syncing pod 33c5829f-ed2d-48df-8d3b-9d4927dc0083 ("dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"
	W0807 19:28:02.710510  655176 logs.go:138] Found kubelet problem: Aug 07 19:27:35 old-k8s-version-145103 kubelet[667]: E0807 19:27:35.505097     667 pod_workers.go:191] Error syncing pod 23d2813a-d0e0-4efb-88b5-a90255e3e770 ("metrics-server-9975d5f86-g5777_kube-system(23d2813a-d0e0-4efb-88b5-a90255e3e770)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0807 19:28:02.710835  655176 logs.go:138] Found kubelet problem: Aug 07 19:27:39 old-k8s-version-145103 kubelet[667]: E0807 19:27:39.505134     667 pod_workers.go:191] Error syncing pod 33c5829f-ed2d-48df-8d3b-9d4927dc0083 ("dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"
	W0807 19:28:02.711020  655176 logs.go:138] Found kubelet problem: Aug 07 19:27:47 old-k8s-version-145103 kubelet[667]: E0807 19:27:47.505474     667 pod_workers.go:191] Error syncing pod 23d2813a-d0e0-4efb-88b5-a90255e3e770 ("metrics-server-9975d5f86-g5777_kube-system(23d2813a-d0e0-4efb-88b5-a90255e3e770)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0807 19:28:02.711346  655176 logs.go:138] Found kubelet problem: Aug 07 19:27:54 old-k8s-version-145103 kubelet[667]: E0807 19:27:54.507623     667 pod_workers.go:191] Error syncing pod 33c5829f-ed2d-48df-8d3b-9d4927dc0083 ("dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"
	W0807 19:28:02.711534  655176 logs.go:138] Found kubelet problem: Aug 07 19:28:00 old-k8s-version-145103 kubelet[667]: E0807 19:28:00.505524     667 pod_workers.go:191] Error syncing pod 23d2813a-d0e0-4efb-88b5-a90255e3e770 ("metrics-server-9975d5f86-g5777_kube-system(23d2813a-d0e0-4efb-88b5-a90255e3e770)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
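The kubelet problems flagged above reduce to two loops: metrics-server can never pull fake.domain/registry.k8s.io/echoserver:1.4 (the fake.domain registry host never resolves, so ErrImagePull settles into ImagePullBackOff), and dashboard-metrics-scraper keeps crashing, its CrashLoopBackOff delay growing 10s, 20s, 40s, 1m20s, 2m40s. A minimal sketch for inspecting the same two pods by hand, assuming the kubectl context carries the profile name as elsewhere in this report:

# Not part of the test run; pod names are taken from the log lines above.
kubectl --context old-k8s-version-145103 -n kube-system \
  describe pod metrics-server-9975d5f86-g5777
kubectl --context old-k8s-version-145103 -n kubernetes-dashboard \
  get pod dashboard-metrics-scraper-8d5bb5db8-mx57w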
	I0807 19:28:02.711545  655176 logs.go:123] Gathering logs for kube-apiserver [b297faa73cadffc51a549e10c40129c612c36462461c40cbdb9bc641d6ee9a07] ...
	I0807 19:28:02.711560  655176 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b297faa73cadffc51a549e10c40129c612c36462461c40cbdb9bc641d6ee9a07"
	I0807 19:28:02.791030  655176 logs.go:123] Gathering logs for etcd [9cc157f961887e16a46adadfe9ca360b16c7c88fa3fb260387fefcf3cbfbbc3f] ...
	I0807 19:28:02.791066  655176 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9cc157f961887e16a46adadfe9ca360b16c7c88fa3fb260387fefcf3cbfbbc3f"
	I0807 19:28:02.843697  655176 logs.go:123] Gathering logs for etcd [3eedb6e8f6840ac29e27007777d3751b9d3c3115d81b0c7922ea53ca5bdf40b0] ...
	I0807 19:28:02.843728  655176 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3eedb6e8f6840ac29e27007777d3751b9d3c3115d81b0c7922ea53ca5bdf40b0"
	I0807 19:28:02.887815  655176 logs.go:123] Gathering logs for kube-proxy [6fa7a0f941cdfdb925109b190e19418627e84b3541aca1a841d45a2170aab263] ...
	I0807 19:28:02.887842  655176 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6fa7a0f941cdfdb925109b190e19418627e84b3541aca1a841d45a2170aab263"
	I0807 19:28:02.929161  655176 logs.go:123] Gathering logs for kube-controller-manager [feec356a13b99aa000a5eb5efe3d70a5b72bd5a70b7158a40648afc4cb27eadf] ...
	I0807 19:28:02.929197  655176 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 feec356a13b99aa000a5eb5efe3d70a5b72bd5a70b7158a40648afc4cb27eadf"
	I0807 19:28:02.988735  655176 logs.go:123] Gathering logs for kube-controller-manager [3e7229b91d01277de26c3bdbed648db1659a47ae7a01f17a25604535059e3b69] ...
	I0807 19:28:02.988778  655176 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e7229b91d01277de26c3bdbed648db1659a47ae7a01f17a25604535059e3b69"
	I0807 19:28:03.052526  655176 logs.go:123] Gathering logs for kindnet [1d7137ab81f756c634b0c7c58cd62f9d324463ad793954a60c9479bdca07c1d9] ...
	I0807 19:28:03.052563  655176 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d7137ab81f756c634b0c7c58cd62f9d324463ad793954a60c9479bdca07c1d9"
	I0807 19:28:03.105962  655176 logs.go:123] Gathering logs for dmesg ...
	I0807 19:28:03.106002  655176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 19:28:03.126652  655176 logs.go:123] Gathering logs for describe nodes ...
	I0807 19:28:03.126686  655176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 19:28:03.290031  655176 logs.go:123] Gathering logs for kube-apiserver [3bc865fe065ed3ad03b284fb01361f63156a204b7ae0b28683c17e870ac4fc4e] ...
	I0807 19:28:03.290067  655176 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3bc865fe065ed3ad03b284fb01361f63156a204b7ae0b28683c17e870ac4fc4e"
	I0807 19:28:03.366835  655176 logs.go:123] Gathering logs for container status ...
	I0807 19:28:03.366884  655176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 19:28:03.426391  655176 out.go:304] Setting ErrFile to fd 2...
	I0807 19:28:03.426417  655176 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0807 19:28:03.426474  655176 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0807 19:28:03.426485  655176 out.go:239]   Aug 07 19:27:35 old-k8s-version-145103 kubelet[667]: E0807 19:27:35.505097     667 pod_workers.go:191] Error syncing pod 23d2813a-d0e0-4efb-88b5-a90255e3e770 ("metrics-server-9975d5f86-g5777_kube-system(23d2813a-d0e0-4efb-88b5-a90255e3e770)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Aug 07 19:27:35 old-k8s-version-145103 kubelet[667]: E0807 19:27:35.505097     667 pod_workers.go:191] Error syncing pod 23d2813a-d0e0-4efb-88b5-a90255e3e770 ("metrics-server-9975d5f86-g5777_kube-system(23d2813a-d0e0-4efb-88b5-a90255e3e770)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0807 19:28:03.426493  655176 out.go:239]   Aug 07 19:27:39 old-k8s-version-145103 kubelet[667]: E0807 19:27:39.505134     667 pod_workers.go:191] Error syncing pod 33c5829f-ed2d-48df-8d3b-9d4927dc0083 ("dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"
	  Aug 07 19:27:39 old-k8s-version-145103 kubelet[667]: E0807 19:27:39.505134     667 pod_workers.go:191] Error syncing pod 33c5829f-ed2d-48df-8d3b-9d4927dc0083 ("dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"
	W0807 19:28:03.426507  655176 out.go:239]   Aug 07 19:27:47 old-k8s-version-145103 kubelet[667]: E0807 19:27:47.505474     667 pod_workers.go:191] Error syncing pod 23d2813a-d0e0-4efb-88b5-a90255e3e770 ("metrics-server-9975d5f86-g5777_kube-system(23d2813a-d0e0-4efb-88b5-a90255e3e770)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Aug 07 19:27:47 old-k8s-version-145103 kubelet[667]: E0807 19:27:47.505474     667 pod_workers.go:191] Error syncing pod 23d2813a-d0e0-4efb-88b5-a90255e3e770 ("metrics-server-9975d5f86-g5777_kube-system(23d2813a-d0e0-4efb-88b5-a90255e3e770)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0807 19:28:03.426514  655176 out.go:239]   Aug 07 19:27:54 old-k8s-version-145103 kubelet[667]: E0807 19:27:54.507623     667 pod_workers.go:191] Error syncing pod 33c5829f-ed2d-48df-8d3b-9d4927dc0083 ("dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"
	  Aug 07 19:27:54 old-k8s-version-145103 kubelet[667]: E0807 19:27:54.507623     667 pod_workers.go:191] Error syncing pod 33c5829f-ed2d-48df-8d3b-9d4927dc0083 ("dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"
	W0807 19:28:03.426523  655176 out.go:239]   Aug 07 19:28:00 old-k8s-version-145103 kubelet[667]: E0807 19:28:00.505524     667 pod_workers.go:191] Error syncing pod 23d2813a-d0e0-4efb-88b5-a90255e3e770 ("metrics-server-9975d5f86-g5777_kube-system(23d2813a-d0e0-4efb-88b5-a90255e3e770)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Aug 07 19:28:00 old-k8s-version-145103 kubelet[667]: E0807 19:28:00.505524     667 pod_workers.go:191] Error syncing pod 23d2813a-d0e0-4efb-88b5-a90255e3e770 ("metrics-server-9975d5f86-g5777_kube-system(23d2813a-d0e0-4efb-88b5-a90255e3e770)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0807 19:28:03.426534  655176 out.go:304] Setting ErrFile to fd 2...
	I0807 19:28:03.426540  655176 out.go:338] TERM=,COLORTERM=, which probably does not support color
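Each pass of the loop above collects the same bundle: minikube shells into the node and tails the last 400 lines of every component container through crictl, with kubelet read from journald. A hand-run equivalent, as a sketch rather than the harness itself (<container-id> is a placeholder for an ID from crictl ps):

minikube ssh -p old-k8s-version-145103 -- \
  sudo /usr/bin/crictl logs --tail 400 <container-id>
# kubelet is a systemd unit, so its logs come from journald instead:
minikube ssh -p old-k8s-version-145103 -- \
  sudo journalctl -u kubelet -n 400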
	I0807 19:28:13.428444  655176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0807 19:28:13.443143  655176 api_server.go:72] duration metric: took 5m54.959319358s to wait for apiserver process to appear ...
	I0807 19:28:13.443176  655176 api_server.go:88] waiting for apiserver healthz status ...
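The healthz wait polls the apiserver's /healthz endpoint, and while it stays unhealthy the run keeps re-gathering the logs below. A hedged hand-run equivalent, where the node IP and port are assumptions for this profile (the logged resolver sits at 192.168.85.1 and 8443 is minikube's default apiserver port):

curl -sk https://192.168.85.2:8443/healthz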
	I0807 19:28:13.443214  655176 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0807 19:28:13.443276  655176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0807 19:28:13.496365  655176 cri.go:89] found id: "3bc865fe065ed3ad03b284fb01361f63156a204b7ae0b28683c17e870ac4fc4e"
	I0807 19:28:13.496385  655176 cri.go:89] found id: "b297faa73cadffc51a549e10c40129c612c36462461c40cbdb9bc641d6ee9a07"
	I0807 19:28:13.496390  655176 cri.go:89] found id: ""
	I0807 19:28:13.496397  655176 logs.go:276] 2 containers: [3bc865fe065ed3ad03b284fb01361f63156a204b7ae0b28683c17e870ac4fc4e b297faa73cadffc51a549e10c40129c612c36462461c40cbdb9bc641d6ee9a07]
	I0807 19:28:13.496455  655176 ssh_runner.go:195] Run: which crictl
	I0807 19:28:13.500912  655176 ssh_runner.go:195] Run: which crictl
	I0807 19:28:13.505348  655176 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0807 19:28:13.505414  655176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0807 19:28:13.557603  655176 cri.go:89] found id: "3eedb6e8f6840ac29e27007777d3751b9d3c3115d81b0c7922ea53ca5bdf40b0"
	I0807 19:28:13.557627  655176 cri.go:89] found id: "9cc157f961887e16a46adadfe9ca360b16c7c88fa3fb260387fefcf3cbfbbc3f"
	I0807 19:28:13.557632  655176 cri.go:89] found id: ""
	I0807 19:28:13.557639  655176 logs.go:276] 2 containers: [3eedb6e8f6840ac29e27007777d3751b9d3c3115d81b0c7922ea53ca5bdf40b0 9cc157f961887e16a46adadfe9ca360b16c7c88fa3fb260387fefcf3cbfbbc3f]
	I0807 19:28:13.557696  655176 ssh_runner.go:195] Run: which crictl
	I0807 19:28:13.562268  655176 ssh_runner.go:195] Run: which crictl
	I0807 19:28:13.566799  655176 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0807 19:28:13.566873  655176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0807 19:28:13.615878  655176 cri.go:89] found id: "71b67ec384ee3491a9aedfb1907bf632c692e24d5bd42dc00d79abb448997faa"
	I0807 19:28:13.615951  655176 cri.go:89] found id: "6e6c60154928b7200ebdc9a3e446964a9486060a56e068fc228fc1cf486ea9f2"
	I0807 19:28:13.615970  655176 cri.go:89] found id: ""
	I0807 19:28:13.615991  655176 logs.go:276] 2 containers: [71b67ec384ee3491a9aedfb1907bf632c692e24d5bd42dc00d79abb448997faa 6e6c60154928b7200ebdc9a3e446964a9486060a56e068fc228fc1cf486ea9f2]
	I0807 19:28:13.616084  655176 ssh_runner.go:195] Run: which crictl
	I0807 19:28:13.620596  655176 ssh_runner.go:195] Run: which crictl
	I0807 19:28:13.624529  655176 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0807 19:28:13.624653  655176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0807 19:28:13.719260  655176 cri.go:89] found id: "6bf62d68898142ab7b24d33e793c1c5ee47a83b90edd2641cba394a846742a58"
	I0807 19:28:13.719333  655176 cri.go:89] found id: "9528fcb65a1d0f25a2c40a2d86715330fc6214aec182d07f3aaaf17856447d71"
	I0807 19:28:13.719351  655176 cri.go:89] found id: ""
	I0807 19:28:13.719388  655176 logs.go:276] 2 containers: [6bf62d68898142ab7b24d33e793c1c5ee47a83b90edd2641cba394a846742a58 9528fcb65a1d0f25a2c40a2d86715330fc6214aec182d07f3aaaf17856447d71]
	I0807 19:28:13.719479  655176 ssh_runner.go:195] Run: which crictl
	I0807 19:28:13.722951  655176 ssh_runner.go:195] Run: which crictl
	I0807 19:28:13.731571  655176 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0807 19:28:13.731706  655176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0807 19:28:13.844993  655176 cri.go:89] found id: "ef0324338dc1e165f57b48b6697cdfdd38e0a716c5db953f324a34d8b8b07a4a"
	I0807 19:28:13.845013  655176 cri.go:89] found id: "6fa7a0f941cdfdb925109b190e19418627e84b3541aca1a841d45a2170aab263"
	I0807 19:28:13.845018  655176 cri.go:89] found id: ""
	I0807 19:28:13.845025  655176 logs.go:276] 2 containers: [ef0324338dc1e165f57b48b6697cdfdd38e0a716c5db953f324a34d8b8b07a4a 6fa7a0f941cdfdb925109b190e19418627e84b3541aca1a841d45a2170aab263]
	I0807 19:28:13.845082  655176 ssh_runner.go:195] Run: which crictl
	I0807 19:28:13.853107  655176 ssh_runner.go:195] Run: which crictl
	I0807 19:28:13.857475  655176 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0807 19:28:13.857617  655176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0807 19:28:13.923536  655176 cri.go:89] found id: "feec356a13b99aa000a5eb5efe3d70a5b72bd5a70b7158a40648afc4cb27eadf"
	I0807 19:28:13.923561  655176 cri.go:89] found id: "3e7229b91d01277de26c3bdbed648db1659a47ae7a01f17a25604535059e3b69"
	I0807 19:28:13.923565  655176 cri.go:89] found id: ""
	I0807 19:28:13.923572  655176 logs.go:276] 2 containers: [feec356a13b99aa000a5eb5efe3d70a5b72bd5a70b7158a40648afc4cb27eadf 3e7229b91d01277de26c3bdbed648db1659a47ae7a01f17a25604535059e3b69]
	I0807 19:28:13.923637  655176 ssh_runner.go:195] Run: which crictl
	I0807 19:28:13.927175  655176 ssh_runner.go:195] Run: which crictl
	I0807 19:28:13.930487  655176 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0807 19:28:13.930564  655176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0807 19:28:13.994745  655176 cri.go:89] found id: "c546073d53684c34a050dfc7fc09a0893a42322936b7996cf13dc99f189689dc"
	I0807 19:28:13.994766  655176 cri.go:89] found id: "1d7137ab81f756c634b0c7c58cd62f9d324463ad793954a60c9479bdca07c1d9"
	I0807 19:28:13.994771  655176 cri.go:89] found id: ""
	I0807 19:28:13.994777  655176 logs.go:276] 2 containers: [c546073d53684c34a050dfc7fc09a0893a42322936b7996cf13dc99f189689dc 1d7137ab81f756c634b0c7c58cd62f9d324463ad793954a60c9479bdca07c1d9]
	I0807 19:28:13.994832  655176 ssh_runner.go:195] Run: which crictl
	I0807 19:28:13.998984  655176 ssh_runner.go:195] Run: which crictl
	I0807 19:28:14.003122  655176 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0807 19:28:14.003225  655176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0807 19:28:14.062570  655176 cri.go:89] found id: "c969c3eda055d4e537185b68a93ca50ae4c4f1bd8623727c4836ae5049aaa92f"
	I0807 19:28:14.062589  655176 cri.go:89] found id: ""
	I0807 19:28:14.062597  655176 logs.go:276] 1 containers: [c969c3eda055d4e537185b68a93ca50ae4c4f1bd8623727c4836ae5049aaa92f]
	I0807 19:28:14.062658  655176 ssh_runner.go:195] Run: which crictl
	I0807 19:28:14.067064  655176 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0807 19:28:14.067143  655176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0807 19:28:14.122535  655176 cri.go:89] found id: "1f71dfd46d47d861349904b991e501ae81c87e5f99aa08ea12edaf13977fd3ef"
	I0807 19:28:14.122560  655176 cri.go:89] found id: "c39d2dd3e3af4ce2f603cdcb5ffba311c3e583e21a32a79940c52420d20e73c2"
	I0807 19:28:14.122565  655176 cri.go:89] found id: ""
	I0807 19:28:14.122572  655176 logs.go:276] 2 containers: [1f71dfd46d47d861349904b991e501ae81c87e5f99aa08ea12edaf13977fd3ef c39d2dd3e3af4ce2f603cdcb5ffba311c3e583e21a32a79940c52420d20e73c2]
	I0807 19:28:14.122630  655176 ssh_runner.go:195] Run: which crictl
	I0807 19:28:14.126766  655176 ssh_runner.go:195] Run: which crictl
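Before each component's logs are pulled, its containers are enumerated with a name filter; two IDs per component are the expected shape here, presumably one exited container from before the restart plus its running replacement (kubernetes-dashboard is the exception with a single ID). A sketch of that listing step, assuming crictl points at the same containerd runtime the test uses:

# --name filters by container name, -a includes exited containers,
# --quiet prints bare IDs.
sudo crictl ps -a --quiet --name=kube-apiserver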
	I0807 19:28:14.130991  655176 logs.go:123] Gathering logs for kube-scheduler [6bf62d68898142ab7b24d33e793c1c5ee47a83b90edd2641cba394a846742a58] ...
	I0807 19:28:14.131020  655176 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6bf62d68898142ab7b24d33e793c1c5ee47a83b90edd2641cba394a846742a58"
	I0807 19:28:14.186679  655176 logs.go:123] Gathering logs for kindnet [1d7137ab81f756c634b0c7c58cd62f9d324463ad793954a60c9479bdca07c1d9] ...
	I0807 19:28:14.186708  655176 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d7137ab81f756c634b0c7c58cd62f9d324463ad793954a60c9479bdca07c1d9"
	I0807 19:28:14.262236  655176 logs.go:123] Gathering logs for etcd [9cc157f961887e16a46adadfe9ca360b16c7c88fa3fb260387fefcf3cbfbbc3f] ...
	I0807 19:28:14.262283  655176 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9cc157f961887e16a46adadfe9ca360b16c7c88fa3fb260387fefcf3cbfbbc3f"
	I0807 19:28:14.337272  655176 logs.go:123] Gathering logs for dmesg ...
	I0807 19:28:14.337306  655176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 19:28:14.361031  655176 logs.go:123] Gathering logs for describe nodes ...
	I0807 19:28:14.361058  655176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 19:28:14.569220  655176 logs.go:123] Gathering logs for kube-apiserver [3bc865fe065ed3ad03b284fb01361f63156a204b7ae0b28683c17e870ac4fc4e] ...
	I0807 19:28:14.569250  655176 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3bc865fe065ed3ad03b284fb01361f63156a204b7ae0b28683c17e870ac4fc4e"
	I0807 19:28:14.696016  655176 logs.go:123] Gathering logs for coredns [71b67ec384ee3491a9aedfb1907bf632c692e24d5bd42dc00d79abb448997faa] ...
	I0807 19:28:14.696074  655176 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 71b67ec384ee3491a9aedfb1907bf632c692e24d5bd42dc00d79abb448997faa"
	I0807 19:28:14.743532  655176 logs.go:123] Gathering logs for kube-scheduler [9528fcb65a1d0f25a2c40a2d86715330fc6214aec182d07f3aaaf17856447d71] ...
	I0807 19:28:14.743562  655176 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9528fcb65a1d0f25a2c40a2d86715330fc6214aec182d07f3aaaf17856447d71"
	I0807 19:28:14.789012  655176 logs.go:123] Gathering logs for kube-proxy [6fa7a0f941cdfdb925109b190e19418627e84b3541aca1a841d45a2170aab263] ...
	I0807 19:28:14.789049  655176 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6fa7a0f941cdfdb925109b190e19418627e84b3541aca1a841d45a2170aab263"
	I0807 19:28:14.833036  655176 logs.go:123] Gathering logs for kube-controller-manager [feec356a13b99aa000a5eb5efe3d70a5b72bd5a70b7158a40648afc4cb27eadf] ...
	I0807 19:28:14.833065  655176 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 feec356a13b99aa000a5eb5efe3d70a5b72bd5a70b7158a40648afc4cb27eadf"
	I0807 19:28:14.945069  655176 logs.go:123] Gathering logs for kubelet ...
	I0807 19:28:14.945103  655176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0807 19:28:15.019860  655176 logs.go:138] Found kubelet problem: Aug 07 19:22:38 old-k8s-version-145103 kubelet[667]: E0807 19:22:38.928728     667 reflector.go:138] object-"kube-system"/"storage-provisioner-token-zfj7r": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-zfj7r" is forbidden: User "system:node:old-k8s-version-145103" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-145103' and this object
	W0807 19:28:15.020109  655176 logs.go:138] Found kubelet problem: Aug 07 19:22:38 old-k8s-version-145103 kubelet[667]: E0807 19:22:38.928825     667 reflector.go:138] object-"kube-system"/"kube-proxy-token-kjh9f": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-kjh9f" is forbidden: User "system:node:old-k8s-version-145103" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-145103' and this object
	W0807 19:28:15.020325  655176 logs.go:138] Found kubelet problem: Aug 07 19:22:38 old-k8s-version-145103 kubelet[667]: E0807 19:22:38.928873     667 reflector.go:138] object-"kube-system"/"kindnet-token-nxw2r": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-nxw2r" is forbidden: User "system:node:old-k8s-version-145103" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-145103' and this object
	W0807 19:28:15.020584  655176 logs.go:138] Found kubelet problem: Aug 07 19:22:38 old-k8s-version-145103 kubelet[667]: E0807 19:22:38.928927     667 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-145103" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-145103' and this object
	W0807 19:28:15.020798  655176 logs.go:138] Found kubelet problem: Aug 07 19:22:38 old-k8s-version-145103 kubelet[667]: E0807 19:22:38.940024     667 reflector.go:138] object-"default"/"default-token-zdfst": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-zdfst" is forbidden: User "system:node:old-k8s-version-145103" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-145103' and this object
	W0807 19:28:15.021026  655176 logs.go:138] Found kubelet problem: Aug 07 19:22:38 old-k8s-version-145103 kubelet[667]: E0807 19:22:38.940096     667 reflector.go:138] object-"kube-system"/"metrics-server-token-zkf4x": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-zkf4x" is forbidden: User "system:node:old-k8s-version-145103" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-145103' and this object
	W0807 19:28:15.021475  655176 logs.go:138] Found kubelet problem: Aug 07 19:22:38 old-k8s-version-145103 kubelet[667]: E0807 19:22:38.940143     667 reflector.go:138] object-"kube-system"/"coredns-token-wzbvq": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-wzbvq" is forbidden: User "system:node:old-k8s-version-145103" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-145103' and this object
	W0807 19:28:15.021740  655176 logs.go:138] Found kubelet problem: Aug 07 19:22:38 old-k8s-version-145103 kubelet[667]: E0807 19:22:38.940191     667 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-145103" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-145103' and this object
	W0807 19:28:15.032180  655176 logs.go:138] Found kubelet problem: Aug 07 19:22:42 old-k8s-version-145103 kubelet[667]: E0807 19:22:42.347184     667 pod_workers.go:191] Error syncing pod 23d2813a-d0e0-4efb-88b5-a90255e3e770 ("metrics-server-9975d5f86-g5777_kube-system(23d2813a-d0e0-4efb-88b5-a90255e3e770)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0807 19:28:15.032444  655176 logs.go:138] Found kubelet problem: Aug 07 19:22:42 old-k8s-version-145103 kubelet[667]: E0807 19:22:42.918907     667 pod_workers.go:191] Error syncing pod 23d2813a-d0e0-4efb-88b5-a90255e3e770 ("metrics-server-9975d5f86-g5777_kube-system(23d2813a-d0e0-4efb-88b5-a90255e3e770)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0807 19:28:15.035608  655176 logs.go:138] Found kubelet problem: Aug 07 19:22:57 old-k8s-version-145103 kubelet[667]: E0807 19:22:57.517610     667 pod_workers.go:191] Error syncing pod 23d2813a-d0e0-4efb-88b5-a90255e3e770 ("metrics-server-9975d5f86-g5777_kube-system(23d2813a-d0e0-4efb-88b5-a90255e3e770)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0807 19:28:15.036025  655176 logs.go:138] Found kubelet problem: Aug 07 19:23:00 old-k8s-version-145103 kubelet[667]: E0807 19:23:00.001804     667 reflector.go:138] object-"kubernetes-dashboard"/"kubernetes-dashboard-token-qhtqd": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kubernetes-dashboard-token-qhtqd" is forbidden: User "system:node:old-k8s-version-145103" cannot list resource "secrets" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'old-k8s-version-145103' and this object
	W0807 19:28:15.038985  655176 logs.go:138] Found kubelet problem: Aug 07 19:23:08 old-k8s-version-145103 kubelet[667]: E0807 19:23:08.509924     667 pod_workers.go:191] Error syncing pod 23d2813a-d0e0-4efb-88b5-a90255e3e770 ("metrics-server-9975d5f86-g5777_kube-system(23d2813a-d0e0-4efb-88b5-a90255e3e770)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0807 19:28:15.039490  655176 logs.go:138] Found kubelet problem: Aug 07 19:23:09 old-k8s-version-145103 kubelet[667]: E0807 19:23:09.027546     667 pod_workers.go:191] Error syncing pod 33c5829f-ed2d-48df-8d3b-9d4927dc0083 ("dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"
	W0807 19:28:15.039822  655176 logs.go:138] Found kubelet problem: Aug 07 19:23:10 old-k8s-version-145103 kubelet[667]: E0807 19:23:10.033699     667 pod_workers.go:191] Error syncing pod 33c5829f-ed2d-48df-8d3b-9d4927dc0083 ("dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"
	W0807 19:28:15.040296  655176 logs.go:138] Found kubelet problem: Aug 07 19:23:13 old-k8s-version-145103 kubelet[667]: E0807 19:23:13.047356     667 pod_workers.go:191] Error syncing pod e63be88a-9706-4c13-ab97-8b04c5a9e516 ("storage-provisioner_kube-system(e63be88a-9706-4c13-ab97-8b04c5a9e516)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(e63be88a-9706-4c13-ab97-8b04c5a9e516)"
	W0807 19:28:15.041304  655176 logs.go:138] Found kubelet problem: Aug 07 19:23:19 old-k8s-version-145103 kubelet[667]: E0807 19:23:19.085460     667 pod_workers.go:191] Error syncing pod 33c5829f-ed2d-48df-8d3b-9d4927dc0083 ("dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"
	W0807 19:28:15.044416  655176 logs.go:138] Found kubelet problem: Aug 07 19:23:20 old-k8s-version-145103 kubelet[667]: E0807 19:23:20.522246     667 pod_workers.go:191] Error syncing pod 23d2813a-d0e0-4efb-88b5-a90255e3e770 ("metrics-server-9975d5f86-g5777_kube-system(23d2813a-d0e0-4efb-88b5-a90255e3e770)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0807 19:28:15.045673  655176 logs.go:138] Found kubelet problem: Aug 07 19:23:28 old-k8s-version-145103 kubelet[667]: E0807 19:23:28.742763     667 pod_workers.go:191] Error syncing pod 33c5829f-ed2d-48df-8d3b-9d4927dc0083 ("dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"
	W0807 19:28:15.045977  655176 logs.go:138] Found kubelet problem: Aug 07 19:23:31 old-k8s-version-145103 kubelet[667]: E0807 19:23:31.505245     667 pod_workers.go:191] Error syncing pod 23d2813a-d0e0-4efb-88b5-a90255e3e770 ("metrics-server-9975d5f86-g5777_kube-system(23d2813a-d0e0-4efb-88b5-a90255e3e770)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0807 19:28:15.046847  655176 logs.go:138] Found kubelet problem: Aug 07 19:23:43 old-k8s-version-145103 kubelet[667]: E0807 19:23:43.151855     667 pod_workers.go:191] Error syncing pod 33c5829f-ed2d-48df-8d3b-9d4927dc0083 ("dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"
	W0807 19:28:15.047122  655176 logs.go:138] Found kubelet problem: Aug 07 19:23:44 old-k8s-version-145103 kubelet[667]: E0807 19:23:44.505345     667 pod_workers.go:191] Error syncing pod 23d2813a-d0e0-4efb-88b5-a90255e3e770 ("metrics-server-9975d5f86-g5777_kube-system(23d2813a-d0e0-4efb-88b5-a90255e3e770)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0807 19:28:15.047468  655176 logs.go:138] Found kubelet problem: Aug 07 19:23:48 old-k8s-version-145103 kubelet[667]: E0807 19:23:48.742421     667 pod_workers.go:191] Error syncing pod 33c5829f-ed2d-48df-8d3b-9d4927dc0083 ("dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"
	W0807 19:28:15.047706  655176 logs.go:138] Found kubelet problem: Aug 07 19:23:57 old-k8s-version-145103 kubelet[667]: E0807 19:23:57.505213     667 pod_workers.go:191] Error syncing pod 23d2813a-d0e0-4efb-88b5-a90255e3e770 ("metrics-server-9975d5f86-g5777_kube-system(23d2813a-d0e0-4efb-88b5-a90255e3e770)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0807 19:28:15.048407  655176 logs.go:138] Found kubelet problem: Aug 07 19:24:04 old-k8s-version-145103 kubelet[667]: E0807 19:24:04.505088     667 pod_workers.go:191] Error syncing pod 33c5829f-ed2d-48df-8d3b-9d4927dc0083 ("dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"
	W0807 19:28:15.051988  655176 logs.go:138] Found kubelet problem: Aug 07 19:24:12 old-k8s-version-145103 kubelet[667]: E0807 19:24:12.520104     667 pod_workers.go:191] Error syncing pod 23d2813a-d0e0-4efb-88b5-a90255e3e770 ("metrics-server-9975d5f86-g5777_kube-system(23d2813a-d0e0-4efb-88b5-a90255e3e770)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0807 19:28:15.052580  655176 logs.go:138] Found kubelet problem: Aug 07 19:24:19 old-k8s-version-145103 kubelet[667]: E0807 19:24:19.504866     667 pod_workers.go:191] Error syncing pod 33c5829f-ed2d-48df-8d3b-9d4927dc0083 ("dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"
	W0807 19:28:15.052851  655176 logs.go:138] Found kubelet problem: Aug 07 19:24:23 old-k8s-version-145103 kubelet[667]: E0807 19:24:23.505072     667 pod_workers.go:191] Error syncing pod 23d2813a-d0e0-4efb-88b5-a90255e3e770 ("metrics-server-9975d5f86-g5777_kube-system(23d2813a-d0e0-4efb-88b5-a90255e3e770)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0807 19:28:15.053501  655176 logs.go:138] Found kubelet problem: Aug 07 19:24:33 old-k8s-version-145103 kubelet[667]: E0807 19:24:33.304075     667 pod_workers.go:191] Error syncing pod 33c5829f-ed2d-48df-8d3b-9d4927dc0083 ("dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"
	W0807 19:28:15.053701  655176 logs.go:138] Found kubelet problem: Aug 07 19:24:35 old-k8s-version-145103 kubelet[667]: E0807 19:24:35.505146     667 pod_workers.go:191] Error syncing pod 23d2813a-d0e0-4efb-88b5-a90255e3e770 ("metrics-server-9975d5f86-g5777_kube-system(23d2813a-d0e0-4efb-88b5-a90255e3e770)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0807 19:28:15.054163  655176 logs.go:138] Found kubelet problem: Aug 07 19:24:38 old-k8s-version-145103 kubelet[667]: E0807 19:24:38.742889     667 pod_workers.go:191] Error syncing pod 33c5829f-ed2d-48df-8d3b-9d4927dc0083 ("dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"
	W0807 19:28:15.054354  655176 logs.go:138] Found kubelet problem: Aug 07 19:24:47 old-k8s-version-145103 kubelet[667]: E0807 19:24:47.513250     667 pod_workers.go:191] Error syncing pod 23d2813a-d0e0-4efb-88b5-a90255e3e770 ("metrics-server-9975d5f86-g5777_kube-system(23d2813a-d0e0-4efb-88b5-a90255e3e770)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0807 19:28:15.055079  655176 logs.go:138] Found kubelet problem: Aug 07 19:24:51 old-k8s-version-145103 kubelet[667]: E0807 19:24:51.504838     667 pod_workers.go:191] Error syncing pod 33c5829f-ed2d-48df-8d3b-9d4927dc0083 ("dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"
	W0807 19:28:15.055282  655176 logs.go:138] Found kubelet problem: Aug 07 19:25:01 old-k8s-version-145103 kubelet[667]: E0807 19:25:01.505233     667 pod_workers.go:191] Error syncing pod 23d2813a-d0e0-4efb-88b5-a90255e3e770 ("metrics-server-9975d5f86-g5777_kube-system(23d2813a-d0e0-4efb-88b5-a90255e3e770)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0807 19:28:15.055793  655176 logs.go:138] Found kubelet problem: Aug 07 19:25:04 old-k8s-version-145103 kubelet[667]: E0807 19:25:04.504768     667 pod_workers.go:191] Error syncing pod 33c5829f-ed2d-48df-8d3b-9d4927dc0083 ("dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"
	W0807 19:28:15.056104  655176 logs.go:138] Found kubelet problem: Aug 07 19:25:12 old-k8s-version-145103 kubelet[667]: E0807 19:25:12.505672     667 pod_workers.go:191] Error syncing pod 23d2813a-d0e0-4efb-88b5-a90255e3e770 ("metrics-server-9975d5f86-g5777_kube-system(23d2813a-d0e0-4efb-88b5-a90255e3e770)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0807 19:28:15.056515  655176 logs.go:138] Found kubelet problem: Aug 07 19:25:15 old-k8s-version-145103 kubelet[667]: E0807 19:25:15.504848     667 pod_workers.go:191] Error syncing pod 33c5829f-ed2d-48df-8d3b-9d4927dc0083 ("dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"
	W0807 19:28:15.056806  655176 logs.go:138] Found kubelet problem: Aug 07 19:25:24 old-k8s-version-145103 kubelet[667]: E0807 19:25:24.508383     667 pod_workers.go:191] Error syncing pod 23d2813a-d0e0-4efb-88b5-a90255e3e770 ("metrics-server-9975d5f86-g5777_kube-system(23d2813a-d0e0-4efb-88b5-a90255e3e770)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0807 19:28:15.058934  655176 logs.go:138] Found kubelet problem: Aug 07 19:25:26 old-k8s-version-145103 kubelet[667]: E0807 19:25:26.506014     667 pod_workers.go:191] Error syncing pod 33c5829f-ed2d-48df-8d3b-9d4927dc0083 ("dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"
	W0807 19:28:15.062836  655176 logs.go:138] Found kubelet problem: Aug 07 19:25:39 old-k8s-version-145103 kubelet[667]: E0807 19:25:39.513837     667 pod_workers.go:191] Error syncing pod 23d2813a-d0e0-4efb-88b5-a90255e3e770 ("metrics-server-9975d5f86-g5777_kube-system(23d2813a-d0e0-4efb-88b5-a90255e3e770)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0807 19:28:15.063315  655176 logs.go:138] Found kubelet problem: Aug 07 19:25:41 old-k8s-version-145103 kubelet[667]: E0807 19:25:41.504881     667 pod_workers.go:191] Error syncing pod 33c5829f-ed2d-48df-8d3b-9d4927dc0083 ("dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"
	W0807 19:28:15.063940  655176 logs.go:138] Found kubelet problem: Aug 07 19:25:52 old-k8s-version-145103 kubelet[667]: E0807 19:25:52.508931     667 pod_workers.go:191] Error syncing pod 33c5829f-ed2d-48df-8d3b-9d4927dc0083 ("dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"
	W0807 19:28:15.064261  655176 logs.go:138] Found kubelet problem: Aug 07 19:25:53 old-k8s-version-145103 kubelet[667]: E0807 19:25:53.506119     667 pod_workers.go:191] Error syncing pod 23d2813a-d0e0-4efb-88b5-a90255e3e770 ("metrics-server-9975d5f86-g5777_kube-system(23d2813a-d0e0-4efb-88b5-a90255e3e770)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0807 19:28:15.064992  655176 logs.go:138] Found kubelet problem: Aug 07 19:26:04 old-k8s-version-145103 kubelet[667]: E0807 19:26:04.550949     667 pod_workers.go:191] Error syncing pod 33c5829f-ed2d-48df-8d3b-9d4927dc0083 ("dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"
	W0807 19:28:15.065191  655176 logs.go:138] Found kubelet problem: Aug 07 19:26:06 old-k8s-version-145103 kubelet[667]: E0807 19:26:06.505835     667 pod_workers.go:191] Error syncing pod 23d2813a-d0e0-4efb-88b5-a90255e3e770 ("metrics-server-9975d5f86-g5777_kube-system(23d2813a-d0e0-4efb-88b5-a90255e3e770)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0807 19:28:15.065801  655176 logs.go:138] Found kubelet problem: Aug 07 19:26:08 old-k8s-version-145103 kubelet[667]: E0807 19:26:08.743279     667 pod_workers.go:191] Error syncing pod 33c5829f-ed2d-48df-8d3b-9d4927dc0083 ("dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"
	W0807 19:28:15.066029  655176 logs.go:138] Found kubelet problem: Aug 07 19:26:17 old-k8s-version-145103 kubelet[667]: E0807 19:26:17.505160     667 pod_workers.go:191] Error syncing pod 23d2813a-d0e0-4efb-88b5-a90255e3e770 ("metrics-server-9975d5f86-g5777_kube-system(23d2813a-d0e0-4efb-88b5-a90255e3e770)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0807 19:28:15.066372  655176 logs.go:138] Found kubelet problem: Aug 07 19:26:22 old-k8s-version-145103 kubelet[667]: E0807 19:26:22.505291     667 pod_workers.go:191] Error syncing pod 33c5829f-ed2d-48df-8d3b-9d4927dc0083 ("dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"
	W0807 19:28:15.066561  655176 logs.go:138] Found kubelet problem: Aug 07 19:26:28 old-k8s-version-145103 kubelet[667]: E0807 19:26:28.505099     667 pod_workers.go:191] Error syncing pod 23d2813a-d0e0-4efb-88b5-a90255e3e770 ("metrics-server-9975d5f86-g5777_kube-system(23d2813a-d0e0-4efb-88b5-a90255e3e770)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0807 19:28:15.067050  655176 logs.go:138] Found kubelet problem: Aug 07 19:26:36 old-k8s-version-145103 kubelet[667]: E0807 19:26:36.506064     667 pod_workers.go:191] Error syncing pod 33c5829f-ed2d-48df-8d3b-9d4927dc0083 ("dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"
	W0807 19:28:15.067298  655176 logs.go:138] Found kubelet problem: Aug 07 19:26:40 old-k8s-version-145103 kubelet[667]: E0807 19:26:40.509650     667 pod_workers.go:191] Error syncing pod 23d2813a-d0e0-4efb-88b5-a90255e3e770 ("metrics-server-9975d5f86-g5777_kube-system(23d2813a-d0e0-4efb-88b5-a90255e3e770)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0807 19:28:15.067672  655176 logs.go:138] Found kubelet problem: Aug 07 19:26:48 old-k8s-version-145103 kubelet[667]: E0807 19:26:48.505788     667 pod_workers.go:191] Error syncing pod 33c5829f-ed2d-48df-8d3b-9d4927dc0083 ("dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"
	W0807 19:28:15.067861  655176 logs.go:138] Found kubelet problem: Aug 07 19:26:55 old-k8s-version-145103 kubelet[667]: E0807 19:26:55.505139     667 pod_workers.go:191] Error syncing pod 23d2813a-d0e0-4efb-88b5-a90255e3e770 ("metrics-server-9975d5f86-g5777_kube-system(23d2813a-d0e0-4efb-88b5-a90255e3e770)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0807 19:28:15.068519  655176 logs.go:138] Found kubelet problem: Aug 07 19:27:00 old-k8s-version-145103 kubelet[667]: E0807 19:27:00.509785     667 pod_workers.go:191] Error syncing pod 33c5829f-ed2d-48df-8d3b-9d4927dc0083 ("dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"
	W0807 19:28:15.068737  655176 logs.go:138] Found kubelet problem: Aug 07 19:27:10 old-k8s-version-145103 kubelet[667]: E0807 19:27:10.505231     667 pod_workers.go:191] Error syncing pod 23d2813a-d0e0-4efb-88b5-a90255e3e770 ("metrics-server-9975d5f86-g5777_kube-system(23d2813a-d0e0-4efb-88b5-a90255e3e770)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0807 19:28:15.069150  655176 logs.go:138] Found kubelet problem: Aug 07 19:27:15 old-k8s-version-145103 kubelet[667]: E0807 19:27:15.504826     667 pod_workers.go:191] Error syncing pod 33c5829f-ed2d-48df-8d3b-9d4927dc0083 ("dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"
	W0807 19:28:15.069340  655176 logs.go:138] Found kubelet problem: Aug 07 19:27:24 old-k8s-version-145103 kubelet[667]: E0807 19:27:24.505344     667 pod_workers.go:191] Error syncing pod 23d2813a-d0e0-4efb-88b5-a90255e3e770 ("metrics-server-9975d5f86-g5777_kube-system(23d2813a-d0e0-4efb-88b5-a90255e3e770)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0807 19:28:15.069668  655176 logs.go:138] Found kubelet problem: Aug 07 19:27:26 old-k8s-version-145103 kubelet[667]: E0807 19:27:26.511078     667 pod_workers.go:191] Error syncing pod 33c5829f-ed2d-48df-8d3b-9d4927dc0083 ("dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"
	W0807 19:28:15.070028  655176 logs.go:138] Found kubelet problem: Aug 07 19:27:35 old-k8s-version-145103 kubelet[667]: E0807 19:27:35.505097     667 pod_workers.go:191] Error syncing pod 23d2813a-d0e0-4efb-88b5-a90255e3e770 ("metrics-server-9975d5f86-g5777_kube-system(23d2813a-d0e0-4efb-88b5-a90255e3e770)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0807 19:28:15.070428  655176 logs.go:138] Found kubelet problem: Aug 07 19:27:39 old-k8s-version-145103 kubelet[667]: E0807 19:27:39.505134     667 pod_workers.go:191] Error syncing pod 33c5829f-ed2d-48df-8d3b-9d4927dc0083 ("dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"
	W0807 19:28:15.070621  655176 logs.go:138] Found kubelet problem: Aug 07 19:27:47 old-k8s-version-145103 kubelet[667]: E0807 19:27:47.505474     667 pod_workers.go:191] Error syncing pod 23d2813a-d0e0-4efb-88b5-a90255e3e770 ("metrics-server-9975d5f86-g5777_kube-system(23d2813a-d0e0-4efb-88b5-a90255e3e770)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0807 19:28:15.070950  655176 logs.go:138] Found kubelet problem: Aug 07 19:27:54 old-k8s-version-145103 kubelet[667]: E0807 19:27:54.507623     667 pod_workers.go:191] Error syncing pod 33c5829f-ed2d-48df-8d3b-9d4927dc0083 ("dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"
	W0807 19:28:15.071135  655176 logs.go:138] Found kubelet problem: Aug 07 19:28:00 old-k8s-version-145103 kubelet[667]: E0807 19:28:00.505524     667 pod_workers.go:191] Error syncing pod 23d2813a-d0e0-4efb-88b5-a90255e3e770 ("metrics-server-9975d5f86-g5777_kube-system(23d2813a-d0e0-4efb-88b5-a90255e3e770)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0807 19:28:15.071621  655176 logs.go:138] Found kubelet problem: Aug 07 19:28:07 old-k8s-version-145103 kubelet[667]: E0807 19:28:07.504777     667 pod_workers.go:191] Error syncing pod 33c5829f-ed2d-48df-8d3b-9d4927dc0083 ("dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"
	W0807 19:28:15.071865  655176 logs.go:138] Found kubelet problem: Aug 07 19:28:12 old-k8s-version-145103 kubelet[667]: E0807 19:28:12.505296     667 pod_workers.go:191] Error syncing pod 23d2813a-d0e0-4efb-88b5-a90255e3e770 ("metrics-server-9975d5f86-g5777_kube-system(23d2813a-d0e0-4efb-88b5-a90255e3e770)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0807 19:28:15.071883  655176 logs.go:123] Gathering logs for kubernetes-dashboard [c969c3eda055d4e537185b68a93ca50ae4c4f1bd8623727c4836ae5049aaa92f] ...
	I0807 19:28:15.071930  655176 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c969c3eda055d4e537185b68a93ca50ae4c4f1bd8623727c4836ae5049aaa92f"
	I0807 19:28:15.137599  655176 logs.go:123] Gathering logs for containerd ...
	I0807 19:28:15.137636  655176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0807 19:28:15.226911  655176 logs.go:123] Gathering logs for kindnet [c546073d53684c34a050dfc7fc09a0893a42322936b7996cf13dc99f189689dc] ...
	I0807 19:28:15.226952  655176 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c546073d53684c34a050dfc7fc09a0893a42322936b7996cf13dc99f189689dc"
	I0807 19:28:15.322954  655176 logs.go:123] Gathering logs for coredns [6e6c60154928b7200ebdc9a3e446964a9486060a56e068fc228fc1cf486ea9f2] ...
	I0807 19:28:15.322998  655176 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e6c60154928b7200ebdc9a3e446964a9486060a56e068fc228fc1cf486ea9f2"
	I0807 19:28:15.415047  655176 logs.go:123] Gathering logs for kube-proxy [ef0324338dc1e165f57b48b6697cdfdd38e0a716c5db953f324a34d8b8b07a4a] ...
	I0807 19:28:15.415075  655176 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ef0324338dc1e165f57b48b6697cdfdd38e0a716c5db953f324a34d8b8b07a4a"
	I0807 19:28:15.494761  655176 logs.go:123] Gathering logs for storage-provisioner [c39d2dd3e3af4ce2f603cdcb5ffba311c3e583e21a32a79940c52420d20e73c2] ...
	I0807 19:28:15.494795  655176 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c39d2dd3e3af4ce2f603cdcb5ffba311c3e583e21a32a79940c52420d20e73c2"
	I0807 19:28:15.554000  655176 logs.go:123] Gathering logs for container status ...
	I0807 19:28:15.554027  655176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 19:28:15.626977  655176 logs.go:123] Gathering logs for kube-apiserver [b297faa73cadffc51a549e10c40129c612c36462461c40cbdb9bc641d6ee9a07] ...
	I0807 19:28:15.627011  655176 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b297faa73cadffc51a549e10c40129c612c36462461c40cbdb9bc641d6ee9a07"
	I0807 19:28:15.737052  655176 logs.go:123] Gathering logs for kube-controller-manager [3e7229b91d01277de26c3bdbed648db1659a47ae7a01f17a25604535059e3b69] ...
	I0807 19:28:15.737136  655176 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e7229b91d01277de26c3bdbed648db1659a47ae7a01f17a25604535059e3b69"
	I0807 19:28:15.817365  655176 logs.go:123] Gathering logs for storage-provisioner [1f71dfd46d47d861349904b991e501ae81c87e5f99aa08ea12edaf13977fd3ef] ...
	I0807 19:28:15.817414  655176 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1f71dfd46d47d861349904b991e501ae81c87e5f99aa08ea12edaf13977fd3ef"
	I0807 19:28:15.875323  655176 logs.go:123] Gathering logs for etcd [3eedb6e8f6840ac29e27007777d3751b9d3c3115d81b0c7922ea53ca5bdf40b0] ...
	I0807 19:28:15.875353  655176 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3eedb6e8f6840ac29e27007777d3751b9d3c3115d81b0c7922ea53ca5bdf40b0"
	I0807 19:28:16.011236  655176 out.go:304] Setting ErrFile to fd 2...
	I0807 19:28:16.011272  655176 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0807 19:28:16.011328  655176 out.go:239] X Problems detected in kubelet:
	W0807 19:28:16.011342  655176 out.go:239]   Aug 07 19:27:47 old-k8s-version-145103 kubelet[667]: E0807 19:27:47.505474     667 pod_workers.go:191] Error syncing pod 23d2813a-d0e0-4efb-88b5-a90255e3e770 ("metrics-server-9975d5f86-g5777_kube-system(23d2813a-d0e0-4efb-88b5-a90255e3e770)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0807 19:28:16.011351  655176 out.go:239]   Aug 07 19:27:54 old-k8s-version-145103 kubelet[667]: E0807 19:27:54.507623     667 pod_workers.go:191] Error syncing pod 33c5829f-ed2d-48df-8d3b-9d4927dc0083 ("dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"
	W0807 19:28:16.011362  655176 out.go:239]   Aug 07 19:28:00 old-k8s-version-145103 kubelet[667]: E0807 19:28:00.505524     667 pod_workers.go:191] Error syncing pod 23d2813a-d0e0-4efb-88b5-a90255e3e770 ("metrics-server-9975d5f86-g5777_kube-system(23d2813a-d0e0-4efb-88b5-a90255e3e770)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0807 19:28:16.011374  655176 out.go:239]   Aug 07 19:28:07 old-k8s-version-145103 kubelet[667]: E0807 19:28:07.504777     667 pod_workers.go:191] Error syncing pod 33c5829f-ed2d-48df-8d3b-9d4927dc0083 ("dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"
	W0807 19:28:16.011386  655176 out.go:239]   Aug 07 19:28:12 old-k8s-version-145103 kubelet[667]: E0807 19:28:12.505296     667 pod_workers.go:191] Error syncing pod 23d2813a-d0e0-4efb-88b5-a90255e3e770 ("metrics-server-9975d5f86-g5777_kube-system(23d2813a-d0e0-4efb-88b5-a90255e3e770)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0807 19:28:16.011394  655176 out.go:304] Setting ErrFile to fd 2...
	I0807 19:28:16.011400  655176 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 19:28:26.013214  655176 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0807 19:28:26.025990  655176 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I0807 19:28:26.028726  655176 out.go:177] 
	W0807 19:28:26.030644  655176 out.go:239] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W0807 19:28:26.030678  655176 out.go:239] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	W0807 19:28:26.030700  655176 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	W0807 19:28:26.030706  655176 out.go:239] * 
	W0807 19:28:26.031891  655176 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0807 19:28:26.034065  655176 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p old-k8s-version-145103 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0": exit status 102
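Note on the failure mode above: the apiserver's /healthz returns 200, but the control plane never reports v1.20.0, so minikube exits with status 102 (K8S_UNHEALTHY_CONTROL_PLANE). The fake.domain image pulls for metrics-server look deliberate (the suite appears to point that addon at an unreachable registry), so the actionable signal is the version mismatch, not the ImagePullBackOff noise. A minimal triage sketch, assuming the profile is still up; the kubectl context and pod names below are taken from the log output above, not from the test code:

	# inspect the two pods the kubelet keeps backing off on
	kubectl --context old-k8s-version-145103 -n kube-system describe pod metrics-server-9975d5f86-g5777
	kubectl --context old-k8s-version-145103 -n kubernetes-dashboard describe pod dashboard-metrics-scraper-8d5bb5db8-mx57w

	# remediation suggested by minikube itself, then retry the failed start with its original args
	out/minikube-linux-arm64 delete --all --purge
	out/minikube-linux-arm64 start -p old-k8s-version-145103 --memory=2200 --alsologtostderr --wait=true \
		--kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false \
		--driver=docker --container-runtime=containerd --kubernetes-version=v1.20.0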
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-145103
helpers_test.go:235: (dbg) docker inspect old-k8s-version-145103:

-- stdout --
	[
	    {
	        "Id": "3e2073edc3a8253d685b64f875aef58d62db54f69792716fa2225b484ef5ee45",
	        "Created": "2024-08-07T19:19:30.29487063Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 655388,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-08-07T19:22:10.500136304Z",
	            "FinishedAt": "2024-08-07T19:22:09.214598065Z"
	        },
	        "Image": "sha256:3c2a9878c3c4bba39f30158565171acf4131a22446ec76f61f10b90a1f2f9e07",
	        "ResolvConfPath": "/var/lib/docker/containers/3e2073edc3a8253d685b64f875aef58d62db54f69792716fa2225b484ef5ee45/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3e2073edc3a8253d685b64f875aef58d62db54f69792716fa2225b484ef5ee45/hostname",
	        "HostsPath": "/var/lib/docker/containers/3e2073edc3a8253d685b64f875aef58d62db54f69792716fa2225b484ef5ee45/hosts",
	        "LogPath": "/var/lib/docker/containers/3e2073edc3a8253d685b64f875aef58d62db54f69792716fa2225b484ef5ee45/3e2073edc3a8253d685b64f875aef58d62db54f69792716fa2225b484ef5ee45-json.log",
	        "Name": "/old-k8s-version-145103",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "old-k8s-version-145103:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-145103",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/9f55a232d1d9ec04151614acb3f5595e6c63cb51b2a6c873cf1df58045ffcfd0-init/diff:/var/lib/docker/overlay2/fb306904e51181155093d9f5e1422a0780db1826017288d8ca0dfbf62d428a72/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9f55a232d1d9ec04151614acb3f5595e6c63cb51b2a6c873cf1df58045ffcfd0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9f55a232d1d9ec04151614acb3f5595e6c63cb51b2a6c873cf1df58045ffcfd0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9f55a232d1d9ec04151614acb3f5595e6c63cb51b2a6c873cf1df58045ffcfd0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-145103",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-145103/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-145103",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-145103",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-145103",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a4ab73ccd290677f0f3852520ecfa145beefeff6463b19ee915c216a1ed69a63",
	            "SandboxKey": "/var/run/docker/netns/a4ab73ccd290",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33458"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33459"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33462"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33460"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33461"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-145103": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:55:02",
	                    "DriverOpts": null,
	                    "NetworkID": "249b644d44e82618d3f7467329dd95ddf404c50de96f83a05b9b50849518ed8c",
	                    "EndpointID": "da417fb5b1547920a0d3304c31df46ddda86b2dff9000bb97829469c9b47428c",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-145103",
	                        "3e2073edc3a8"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
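The dump above is the full inspect JSON; when only a few fields matter, Docker's built-in Go-template formatter can pull them out directly. A minimal sketch, hand-written against the output above (the container name, port key, and network name are all taken from that dump):

	# container state -- prints "running" per .State.Status in the dump
	docker inspect -f '{{.State.Status}}' old-k8s-version-145103
	# host port mapped to the apiserver port 8443/tcp -- prints "33461" per .NetworkSettings.Ports
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' old-k8s-version-145103
	# container IP on the profile network -- prints "192.168.85.2" per .NetworkSettings.Networks
	docker inspect -f '{{(index .NetworkSettings.Networks "old-k8s-version-145103").IPAddress}}' old-k8s-version-145103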
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-145103 -n old-k8s-version-145103
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-145103 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-145103 logs -n 25: (2.601132923s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |         Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	| start   | -p cert-expiration-863658                              | cert-expiration-863658   | jenkins | v1.33.1 | 07 Aug 24 19:18 UTC | 07 Aug 24 19:18 UTC |
	|         | --memory=2048                                          |                          |         |         |                     |                     |
	|         | --cert-expiration=3m                                   |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	| ssh     | force-systemd-env-380157                               | force-systemd-env-380157 | jenkins | v1.33.1 | 07 Aug 24 19:18 UTC | 07 Aug 24 19:18 UTC |
	|         | ssh cat                                                |                          |         |         |                     |                     |
	|         | /etc/containerd/config.toml                            |                          |         |         |                     |                     |
	| delete  | -p force-systemd-env-380157                            | force-systemd-env-380157 | jenkins | v1.33.1 | 07 Aug 24 19:18 UTC | 07 Aug 24 19:18 UTC |
	| start   | -p cert-options-890209                                 | cert-options-890209      | jenkins | v1.33.1 | 07 Aug 24 19:18 UTC | 07 Aug 24 19:19 UTC |
	|         | --memory=2048                                          |                          |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1                              |                          |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15                          |                          |         |         |                     |                     |
	|         | --apiserver-names=localhost                            |                          |         |         |                     |                     |
	|         | --apiserver-names=www.google.com                       |                          |         |         |                     |                     |
	|         | --apiserver-port=8555                                  |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	| ssh     | cert-options-890209 ssh                                | cert-options-890209      | jenkins | v1.33.1 | 07 Aug 24 19:19 UTC | 07 Aug 24 19:19 UTC |
	|         | openssl x509 -text -noout -in                          |                          |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                          |         |         |                     |                     |
	| ssh     | -p cert-options-890209 -- sudo                         | cert-options-890209      | jenkins | v1.33.1 | 07 Aug 24 19:19 UTC | 07 Aug 24 19:19 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                          |         |         |                     |                     |
	| delete  | -p cert-options-890209                                 | cert-options-890209      | jenkins | v1.33.1 | 07 Aug 24 19:19 UTC | 07 Aug 24 19:19 UTC |
	| start   | -p old-k8s-version-145103                              | old-k8s-version-145103   | jenkins | v1.33.1 | 07 Aug 24 19:19 UTC | 07 Aug 24 19:21 UTC |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                          |         |         |                     |                     |
	|         | --kvm-network=default                                  |                          |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                          |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                          |         |         |                     |                     |
	|         | --keep-context=false                                   |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                          |         |         |                     |                     |
	| start   | -p cert-expiration-863658                              | cert-expiration-863658   | jenkins | v1.33.1 | 07 Aug 24 19:21 UTC | 07 Aug 24 19:21 UTC |
	|         | --memory=2048                                          |                          |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	| delete  | -p cert-expiration-863658                              | cert-expiration-863658   | jenkins | v1.33.1 | 07 Aug 24 19:21 UTC | 07 Aug 24 19:21 UTC |
	| addons  | enable metrics-server -p old-k8s-version-145103        | old-k8s-version-145103   | jenkins | v1.33.1 | 07 Aug 24 19:21 UTC | 07 Aug 24 19:21 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                          |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                          |         |         |                     |                     |
	| stop    | -p old-k8s-version-145103                              | old-k8s-version-145103   | jenkins | v1.33.1 | 07 Aug 24 19:21 UTC | 07 Aug 24 19:22 UTC |
	|         | --alsologtostderr -v=3                                 |                          |         |         |                     |                     |
	| start   | -p no-preload-708131                                   | no-preload-708131        | jenkins | v1.33.1 | 07 Aug 24 19:21 UTC | 07 Aug 24 19:23 UTC |
	|         | --memory=2200 --alsologtostderr                        |                          |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0                      |                          |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-145103             | old-k8s-version-145103   | jenkins | v1.33.1 | 07 Aug 24 19:22 UTC | 07 Aug 24 19:22 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                          |         |         |                     |                     |
	| start   | -p old-k8s-version-145103                              | old-k8s-version-145103   | jenkins | v1.33.1 | 07 Aug 24 19:22 UTC |                     |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                          |         |         |                     |                     |
	|         | --kvm-network=default                                  |                          |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                          |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                          |         |         |                     |                     |
	|         | --keep-context=false                                   |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                          |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-708131             | no-preload-708131        | jenkins | v1.33.1 | 07 Aug 24 19:23 UTC | 07 Aug 24 19:23 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                          |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                          |         |         |                     |                     |
	| stop    | -p no-preload-708131                                   | no-preload-708131        | jenkins | v1.33.1 | 07 Aug 24 19:23 UTC | 07 Aug 24 19:23 UTC |
	|         | --alsologtostderr -v=3                                 |                          |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-708131                  | no-preload-708131        | jenkins | v1.33.1 | 07 Aug 24 19:23 UTC | 07 Aug 24 19:23 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                          |         |         |                     |                     |
	| start   | -p no-preload-708131                                   | no-preload-708131        | jenkins | v1.33.1 | 07 Aug 24 19:23 UTC | 07 Aug 24 19:27 UTC |
	|         | --memory=2200 --alsologtostderr                        |                          |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0                      |                          |         |         |                     |                     |
	| image   | no-preload-708131 image list                           | no-preload-708131        | jenkins | v1.33.1 | 07 Aug 24 19:28 UTC | 07 Aug 24 19:28 UTC |
	|         | --format=json                                          |                          |         |         |                     |                     |
	| pause   | -p no-preload-708131                                   | no-preload-708131        | jenkins | v1.33.1 | 07 Aug 24 19:28 UTC | 07 Aug 24 19:28 UTC |
	|         | --alsologtostderr -v=1                                 |                          |         |         |                     |                     |
	| unpause | -p no-preload-708131                                   | no-preload-708131        | jenkins | v1.33.1 | 07 Aug 24 19:28 UTC | 07 Aug 24 19:28 UTC |
	|         | --alsologtostderr -v=1                                 |                          |         |         |                     |                     |
	| delete  | -p no-preload-708131                                   | no-preload-708131        | jenkins | v1.33.1 | 07 Aug 24 19:28 UTC | 07 Aug 24 19:28 UTC |
	| delete  | -p no-preload-708131                                   | no-preload-708131        | jenkins | v1.33.1 | 07 Aug 24 19:28 UTC | 07 Aug 24 19:28 UTC |
	| start   | -p embed-certs-313116                                  | embed-certs-313116       | jenkins | v1.33.1 | 07 Aug 24 19:28 UTC |                     |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                          |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                          |         |         |                     |                     |
	|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
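	The last audit row for old-k8s-version-145103 with an empty End Time is the SecondStart under test. A sketch for replaying it outside the harness, with the flags copied verbatim from that row (out/minikube-linux-arm64 is the tree-local build these jobs use):

	  out/minikube-linux-arm64 start -p old-k8s-version-145103 --memory=2200 \
	    --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system \
	    --disable-driver-mounts --keep-context=false --driver=docker \
	    --container-runtime=containerd --kubernetes-version=v1.20.0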
	
	
	==> Last Start <==
	Log file created at: 2024/08/07 19:28:14
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0807 19:28:14.409387  666607 out.go:291] Setting OutFile to fd 1 ...
	I0807 19:28:14.409638  666607 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 19:28:14.409669  666607 out.go:304] Setting ErrFile to fd 2...
	I0807 19:28:14.409688  666607 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 19:28:14.409949  666607 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19389-443116/.minikube/bin
	I0807 19:28:14.410420  666607 out.go:298] Setting JSON to false
	I0807 19:28:14.411505  666607 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":11446,"bootTime":1723047449,"procs":234,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0807 19:28:14.411606  666607 start.go:139] virtualization:  
	I0807 19:28:14.416010  666607 out.go:177] * [embed-certs-313116] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0807 19:28:14.418855  666607 notify.go:220] Checking for updates...
	I0807 19:28:14.419418  666607 out.go:177]   - MINIKUBE_LOCATION=19389
	I0807 19:28:14.422353  666607 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0807 19:28:14.424269  666607 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19389-443116/kubeconfig
	I0807 19:28:14.426671  666607 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19389-443116/.minikube
	I0807 19:28:14.428486  666607 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0807 19:28:14.430362  666607 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0807 19:28:14.432949  666607 config.go:182] Loaded profile config "old-k8s-version-145103": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0807 19:28:14.433051  666607 driver.go:392] Setting default libvirt URI to qemu:///system
	I0807 19:28:14.459415  666607 docker.go:123] docker version: linux-27.1.1:Docker Engine - Community
	I0807 19:28:14.459564  666607 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0807 19:28:14.608484  666607 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:53 SystemTime:2024-08-07 19:28:14.597920071 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0807 19:28:14.608602  666607 docker.go:307] overlay module found
	I0807 19:28:14.612044  666607 out.go:177] * Using the docker driver based on user configuration
	I0807 19:28:14.613807  666607 start.go:297] selected driver: docker
	I0807 19:28:14.613840  666607 start.go:901] validating driver "docker" against <nil>
	I0807 19:28:14.613854  666607 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0807 19:28:14.614668  666607 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0807 19:28:14.691371  666607 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:53 SystemTime:2024-08-07 19:28:14.68072347 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0807 19:28:14.691601  666607 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0807 19:28:14.691928  666607 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0807 19:28:14.693940  666607 out.go:177] * Using Docker driver with root privileges
	I0807 19:28:14.695915  666607 cni.go:84] Creating CNI manager for ""
	I0807 19:28:14.695934  666607 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0807 19:28:14.695952  666607 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0807 19:28:14.696047  666607 start.go:340] cluster config:
	{Name:embed-certs-313116 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-313116 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 19:28:14.700074  666607 out.go:177] * Starting "embed-certs-313116" primary control-plane node in "embed-certs-313116" cluster
	I0807 19:28:14.701711  666607 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0807 19:28:14.703739  666607 out.go:177] * Pulling base image v0.0.44-1723026928-19389 ...
	I0807 19:28:14.706094  666607 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime containerd
	I0807 19:28:14.706149  666607 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19389-443116/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-containerd-overlay2-arm64.tar.lz4
	I0807 19:28:14.706165  666607 cache.go:56] Caching tarball of preloaded images
	I0807 19:28:14.706184  666607 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 in local docker daemon
	I0807 19:28:14.706373  666607 preload.go:172] Found /home/jenkins/minikube-integration/19389-443116/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0807 19:28:14.706392  666607 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on containerd
	I0807 19:28:14.706502  666607 profile.go:143] Saving config to /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/embed-certs-313116/config.json ...
	I0807 19:28:14.706522  666607 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/embed-certs-313116/config.json: {Name:mkfc21c70ca4ec4f2ab40f69ab28a6bd1eaa79ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	W0807 19:28:14.728595  666607 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 is of wrong architecture
	I0807 19:28:14.728613  666607 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 to local cache
	I0807 19:28:14.728687  666607 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 in local cache directory
	I0807 19:28:14.728704  666607 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 in local cache directory, skipping pull
	I0807 19:28:14.728708  666607 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 exists in cache, skipping pull
	I0807 19:28:14.728716  666607 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 as a tarball
	I0807 19:28:14.728722  666607 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 from local cache
	I0807 19:28:14.863136  666607 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 from cached tarball
	I0807 19:28:14.863186  666607 cache.go:194] Successfully downloaded all kic artifacts
	I0807 19:28:14.863222  666607 start.go:360] acquireMachinesLock for embed-certs-313116: {Name:mka8e5afa3c10a47525c51a297ac5954176946c1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0807 19:28:14.863666  666607 start.go:364] duration metric: took 419.635µs to acquireMachinesLock for "embed-certs-313116"
	I0807 19:28:14.863703  666607 start.go:93] Provisioning new machine with config: &{Name:embed-certs-313116 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-313116 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0807 19:28:14.863791  666607 start.go:125] createHost starting for "" (driver="docker")
	I0807 19:28:13.428444  655176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0807 19:28:13.443143  655176 api_server.go:72] duration metric: took 5m54.959319358s to wait for apiserver process to appear ...
	I0807 19:28:13.443176  655176 api_server.go:88] waiting for apiserver healthz status ...
	I0807 19:28:13.443214  655176 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0807 19:28:13.443276  655176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0807 19:28:13.496365  655176 cri.go:89] found id: "3bc865fe065ed3ad03b284fb01361f63156a204b7ae0b28683c17e870ac4fc4e"
	I0807 19:28:13.496385  655176 cri.go:89] found id: "b297faa73cadffc51a549e10c40129c612c36462461c40cbdb9bc641d6ee9a07"
	I0807 19:28:13.496390  655176 cri.go:89] found id: ""
	I0807 19:28:13.496397  655176 logs.go:276] 2 containers: [3bc865fe065ed3ad03b284fb01361f63156a204b7ae0b28683c17e870ac4fc4e b297faa73cadffc51a549e10c40129c612c36462461c40cbdb9bc641d6ee9a07]
	I0807 19:28:13.496455  655176 ssh_runner.go:195] Run: which crictl
	I0807 19:28:13.500912  655176 ssh_runner.go:195] Run: which crictl
	I0807 19:28:13.505348  655176 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0807 19:28:13.505414  655176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0807 19:28:13.557603  655176 cri.go:89] found id: "3eedb6e8f6840ac29e27007777d3751b9d3c3115d81b0c7922ea53ca5bdf40b0"
	I0807 19:28:13.557627  655176 cri.go:89] found id: "9cc157f961887e16a46adadfe9ca360b16c7c88fa3fb260387fefcf3cbfbbc3f"
	I0807 19:28:13.557632  655176 cri.go:89] found id: ""
	I0807 19:28:13.557639  655176 logs.go:276] 2 containers: [3eedb6e8f6840ac29e27007777d3751b9d3c3115d81b0c7922ea53ca5bdf40b0 9cc157f961887e16a46adadfe9ca360b16c7c88fa3fb260387fefcf3cbfbbc3f]
	I0807 19:28:13.557696  655176 ssh_runner.go:195] Run: which crictl
	I0807 19:28:13.562268  655176 ssh_runner.go:195] Run: which crictl
	I0807 19:28:13.566799  655176 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0807 19:28:13.566873  655176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0807 19:28:13.615878  655176 cri.go:89] found id: "71b67ec384ee3491a9aedfb1907bf632c692e24d5bd42dc00d79abb448997faa"
	I0807 19:28:13.615951  655176 cri.go:89] found id: "6e6c60154928b7200ebdc9a3e446964a9486060a56e068fc228fc1cf486ea9f2"
	I0807 19:28:13.615970  655176 cri.go:89] found id: ""
	I0807 19:28:13.615991  655176 logs.go:276] 2 containers: [71b67ec384ee3491a9aedfb1907bf632c692e24d5bd42dc00d79abb448997faa 6e6c60154928b7200ebdc9a3e446964a9486060a56e068fc228fc1cf486ea9f2]
	I0807 19:28:13.616084  655176 ssh_runner.go:195] Run: which crictl
	I0807 19:28:13.620596  655176 ssh_runner.go:195] Run: which crictl
	I0807 19:28:13.624529  655176 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0807 19:28:13.624653  655176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0807 19:28:13.719260  655176 cri.go:89] found id: "6bf62d68898142ab7b24d33e793c1c5ee47a83b90edd2641cba394a846742a58"
	I0807 19:28:13.719333  655176 cri.go:89] found id: "9528fcb65a1d0f25a2c40a2d86715330fc6214aec182d07f3aaaf17856447d71"
	I0807 19:28:13.719351  655176 cri.go:89] found id: ""
	I0807 19:28:13.719388  655176 logs.go:276] 2 containers: [6bf62d68898142ab7b24d33e793c1c5ee47a83b90edd2641cba394a846742a58 9528fcb65a1d0f25a2c40a2d86715330fc6214aec182d07f3aaaf17856447d71]
	I0807 19:28:13.719479  655176 ssh_runner.go:195] Run: which crictl
	I0807 19:28:13.722951  655176 ssh_runner.go:195] Run: which crictl
	I0807 19:28:13.731571  655176 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0807 19:28:13.731706  655176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0807 19:28:13.844993  655176 cri.go:89] found id: "ef0324338dc1e165f57b48b6697cdfdd38e0a716c5db953f324a34d8b8b07a4a"
	I0807 19:28:13.845013  655176 cri.go:89] found id: "6fa7a0f941cdfdb925109b190e19418627e84b3541aca1a841d45a2170aab263"
	I0807 19:28:13.845018  655176 cri.go:89] found id: ""
	I0807 19:28:13.845025  655176 logs.go:276] 2 containers: [ef0324338dc1e165f57b48b6697cdfdd38e0a716c5db953f324a34d8b8b07a4a 6fa7a0f941cdfdb925109b190e19418627e84b3541aca1a841d45a2170aab263]
	I0807 19:28:13.845082  655176 ssh_runner.go:195] Run: which crictl
	I0807 19:28:13.853107  655176 ssh_runner.go:195] Run: which crictl
	I0807 19:28:13.857475  655176 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0807 19:28:13.857617  655176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0807 19:28:13.923536  655176 cri.go:89] found id: "feec356a13b99aa000a5eb5efe3d70a5b72bd5a70b7158a40648afc4cb27eadf"
	I0807 19:28:13.923561  655176 cri.go:89] found id: "3e7229b91d01277de26c3bdbed648db1659a47ae7a01f17a25604535059e3b69"
	I0807 19:28:13.923565  655176 cri.go:89] found id: ""
	I0807 19:28:13.923572  655176 logs.go:276] 2 containers: [feec356a13b99aa000a5eb5efe3d70a5b72bd5a70b7158a40648afc4cb27eadf 3e7229b91d01277de26c3bdbed648db1659a47ae7a01f17a25604535059e3b69]
	I0807 19:28:13.923637  655176 ssh_runner.go:195] Run: which crictl
	I0807 19:28:13.927175  655176 ssh_runner.go:195] Run: which crictl
	I0807 19:28:13.930487  655176 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0807 19:28:13.930564  655176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0807 19:28:13.994745  655176 cri.go:89] found id: "c546073d53684c34a050dfc7fc09a0893a42322936b7996cf13dc99f189689dc"
	I0807 19:28:13.994766  655176 cri.go:89] found id: "1d7137ab81f756c634b0c7c58cd62f9d324463ad793954a60c9479bdca07c1d9"
	I0807 19:28:13.994771  655176 cri.go:89] found id: ""
	I0807 19:28:13.994777  655176 logs.go:276] 2 containers: [c546073d53684c34a050dfc7fc09a0893a42322936b7996cf13dc99f189689dc 1d7137ab81f756c634b0c7c58cd62f9d324463ad793954a60c9479bdca07c1d9]
	I0807 19:28:13.994832  655176 ssh_runner.go:195] Run: which crictl
	I0807 19:28:13.998984  655176 ssh_runner.go:195] Run: which crictl
	I0807 19:28:14.003122  655176 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0807 19:28:14.003225  655176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0807 19:28:14.062570  655176 cri.go:89] found id: "c969c3eda055d4e537185b68a93ca50ae4c4f1bd8623727c4836ae5049aaa92f"
	I0807 19:28:14.062589  655176 cri.go:89] found id: ""
	I0807 19:28:14.062597  655176 logs.go:276] 1 containers: [c969c3eda055d4e537185b68a93ca50ae4c4f1bd8623727c4836ae5049aaa92f]
	I0807 19:28:14.062658  655176 ssh_runner.go:195] Run: which crictl
	I0807 19:28:14.067064  655176 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0807 19:28:14.067143  655176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0807 19:28:14.122535  655176 cri.go:89] found id: "1f71dfd46d47d861349904b991e501ae81c87e5f99aa08ea12edaf13977fd3ef"
	I0807 19:28:14.122560  655176 cri.go:89] found id: "c39d2dd3e3af4ce2f603cdcb5ffba311c3e583e21a32a79940c52420d20e73c2"
	I0807 19:28:14.122565  655176 cri.go:89] found id: ""
	I0807 19:28:14.122572  655176 logs.go:276] 2 containers: [1f71dfd46d47d861349904b991e501ae81c87e5f99aa08ea12edaf13977fd3ef c39d2dd3e3af4ce2f603cdcb5ffba311c3e583e21a32a79940c52420d20e73c2]
	I0807 19:28:14.122630  655176 ssh_runner.go:195] Run: which crictl
	I0807 19:28:14.126766  655176 ssh_runner.go:195] Run: which crictl
	I0807 19:28:14.130991  655176 logs.go:123] Gathering logs for kube-scheduler [6bf62d68898142ab7b24d33e793c1c5ee47a83b90edd2641cba394a846742a58] ...
	I0807 19:28:14.131020  655176 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6bf62d68898142ab7b24d33e793c1c5ee47a83b90edd2641cba394a846742a58"
	I0807 19:28:14.186679  655176 logs.go:123] Gathering logs for kindnet [1d7137ab81f756c634b0c7c58cd62f9d324463ad793954a60c9479bdca07c1d9] ...
	I0807 19:28:14.186708  655176 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d7137ab81f756c634b0c7c58cd62f9d324463ad793954a60c9479bdca07c1d9"
	I0807 19:28:14.262236  655176 logs.go:123] Gathering logs for etcd [9cc157f961887e16a46adadfe9ca360b16c7c88fa3fb260387fefcf3cbfbbc3f] ...
	I0807 19:28:14.262283  655176 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9cc157f961887e16a46adadfe9ca360b16c7c88fa3fb260387fefcf3cbfbbc3f"
	I0807 19:28:14.337272  655176 logs.go:123] Gathering logs for dmesg ...
	I0807 19:28:14.337306  655176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 19:28:14.361031  655176 logs.go:123] Gathering logs for describe nodes ...
	I0807 19:28:14.361058  655176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0807 19:28:14.569220  655176 logs.go:123] Gathering logs for kube-apiserver [3bc865fe065ed3ad03b284fb01361f63156a204b7ae0b28683c17e870ac4fc4e] ...
	I0807 19:28:14.569250  655176 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3bc865fe065ed3ad03b284fb01361f63156a204b7ae0b28683c17e870ac4fc4e"
	I0807 19:28:14.696016  655176 logs.go:123] Gathering logs for coredns [71b67ec384ee3491a9aedfb1907bf632c692e24d5bd42dc00d79abb448997faa] ...
	I0807 19:28:14.696074  655176 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 71b67ec384ee3491a9aedfb1907bf632c692e24d5bd42dc00d79abb448997faa"
	I0807 19:28:14.743532  655176 logs.go:123] Gathering logs for kube-scheduler [9528fcb65a1d0f25a2c40a2d86715330fc6214aec182d07f3aaaf17856447d71] ...
	I0807 19:28:14.743562  655176 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9528fcb65a1d0f25a2c40a2d86715330fc6214aec182d07f3aaaf17856447d71"
	I0807 19:28:14.789012  655176 logs.go:123] Gathering logs for kube-proxy [6fa7a0f941cdfdb925109b190e19418627e84b3541aca1a841d45a2170aab263] ...
	I0807 19:28:14.789049  655176 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6fa7a0f941cdfdb925109b190e19418627e84b3541aca1a841d45a2170aab263"
	I0807 19:28:14.833036  655176 logs.go:123] Gathering logs for kube-controller-manager [feec356a13b99aa000a5eb5efe3d70a5b72bd5a70b7158a40648afc4cb27eadf] ...
	I0807 19:28:14.833065  655176 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 feec356a13b99aa000a5eb5efe3d70a5b72bd5a70b7158a40648afc4cb27eadf"
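	The pattern in the 655176 process above is minikube's post-mortem log-gathering loop: for each control-plane component it lists the matching CRI containers, then tails each container's log. A manual equivalent on the node (e.g. over minikube ssh; the ID is whatever the first command prints) would look like:

	  sudo crictl ps -a --quiet --name=kube-apiserver
	  sudo /usr/bin/crictl logs --tail 400 <container-id>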
	I0807 19:28:14.866150  666607 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0807 19:28:14.866396  666607 start.go:159] libmachine.API.Create for "embed-certs-313116" (driver="docker")
	I0807 19:28:14.866438  666607 client.go:168] LocalClient.Create starting
	I0807 19:28:14.866513  666607 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19389-443116/.minikube/certs/ca.pem
	I0807 19:28:14.866549  666607 main.go:141] libmachine: Decoding PEM data...
	I0807 19:28:14.866566  666607 main.go:141] libmachine: Parsing certificate...
	I0807 19:28:14.866629  666607 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19389-443116/.minikube/certs/cert.pem
	I0807 19:28:14.866652  666607 main.go:141] libmachine: Decoding PEM data...
	I0807 19:28:14.866666  666607 main.go:141] libmachine: Parsing certificate...
	I0807 19:28:14.867045  666607 cli_runner.go:164] Run: docker network inspect embed-certs-313116 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0807 19:28:14.884057  666607 cli_runner.go:211] docker network inspect embed-certs-313116 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0807 19:28:14.884207  666607 network_create.go:284] running [docker network inspect embed-certs-313116] to gather additional debugging logs...
	I0807 19:28:14.884230  666607 cli_runner.go:164] Run: docker network inspect embed-certs-313116
	W0807 19:28:14.937253  666607 cli_runner.go:211] docker network inspect embed-certs-313116 returned with exit code 1
	I0807 19:28:14.937291  666607 network_create.go:287] error running [docker network inspect embed-certs-313116]: docker network inspect embed-certs-313116: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-313116 not found
	I0807 19:28:14.937305  666607 network_create.go:289] output of [docker network inspect embed-certs-313116]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-313116 not found
	
	** /stderr **
	I0807 19:28:14.937405  666607 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0807 19:28:14.958707  666607 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-dd8e8fe975ae IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:69:bf:6a:6b} reservation:<nil>}
	I0807 19:28:14.959143  666607 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-d7a6b985056f IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:8a:a8:2d:c0} reservation:<nil>}
	I0807 19:28:14.959555  666607 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-d25e21937b36 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:03:c5:8a:52} reservation:<nil>}
	I0807 19:28:14.960222  666607 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400189c530}
	I0807 19:28:14.960246  666607 network_create.go:124] attempt to create docker network embed-certs-313116 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0807 19:28:14.960407  666607 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-313116 embed-certs-313116
	I0807 19:28:15.085549  666607 network_create.go:108] docker network embed-certs-313116 192.168.76.0/24 created
	I0807 19:28:15.085584  666607 kic.go:121] calculated static IP "192.168.76.2" for the "embed-certs-313116" container
	I0807 19:28:15.085678  666607 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0807 19:28:15.114236  666607 cli_runner.go:164] Run: docker volume create embed-certs-313116 --label name.minikube.sigs.k8s.io=embed-certs-313116 --label created_by.minikube.sigs.k8s.io=true
	I0807 19:28:15.139380  666607 oci.go:103] Successfully created a docker volume embed-certs-313116
	I0807 19:28:15.139505  666607 cli_runner.go:164] Run: docker run --rm --name embed-certs-313116-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-313116 --entrypoint /usr/bin/test -v embed-certs-313116:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 -d /var/lib
	I0807 19:28:15.885989  666607 oci.go:107] Successfully prepared a docker volume embed-certs-313116
	I0807 19:28:15.886044  666607 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime containerd
	I0807 19:28:15.886065  666607 kic.go:194] Starting extracting preloaded images to volume ...
	I0807 19:28:15.886150  666607 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19389-443116/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-313116:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0807 19:28:14.945069  655176 logs.go:123] Gathering logs for kubelet ...
	I0807 19:28:14.945103  655176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0807 19:28:15.019860  655176 logs.go:138] Found kubelet problem: Aug 07 19:22:38 old-k8s-version-145103 kubelet[667]: E0807 19:22:38.928728     667 reflector.go:138] object-"kube-system"/"storage-provisioner-token-zfj7r": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-zfj7r" is forbidden: User "system:node:old-k8s-version-145103" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-145103' and this object
	W0807 19:28:15.020109  655176 logs.go:138] Found kubelet problem: Aug 07 19:22:38 old-k8s-version-145103 kubelet[667]: E0807 19:22:38.928825     667 reflector.go:138] object-"kube-system"/"kube-proxy-token-kjh9f": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-kjh9f" is forbidden: User "system:node:old-k8s-version-145103" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-145103' and this object
	W0807 19:28:15.020325  655176 logs.go:138] Found kubelet problem: Aug 07 19:22:38 old-k8s-version-145103 kubelet[667]: E0807 19:22:38.928873     667 reflector.go:138] object-"kube-system"/"kindnet-token-nxw2r": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-nxw2r" is forbidden: User "system:node:old-k8s-version-145103" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-145103' and this object
	W0807 19:28:15.020584  655176 logs.go:138] Found kubelet problem: Aug 07 19:22:38 old-k8s-version-145103 kubelet[667]: E0807 19:22:38.928927     667 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-145103" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-145103' and this object
	W0807 19:28:15.020798  655176 logs.go:138] Found kubelet problem: Aug 07 19:22:38 old-k8s-version-145103 kubelet[667]: E0807 19:22:38.940024     667 reflector.go:138] object-"default"/"default-token-zdfst": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-zdfst" is forbidden: User "system:node:old-k8s-version-145103" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-145103' and this object
	W0807 19:28:15.021026  655176 logs.go:138] Found kubelet problem: Aug 07 19:22:38 old-k8s-version-145103 kubelet[667]: E0807 19:22:38.940096     667 reflector.go:138] object-"kube-system"/"metrics-server-token-zkf4x": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-zkf4x" is forbidden: User "system:node:old-k8s-version-145103" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-145103' and this object
	W0807 19:28:15.021475  655176 logs.go:138] Found kubelet problem: Aug 07 19:22:38 old-k8s-version-145103 kubelet[667]: E0807 19:22:38.940143     667 reflector.go:138] object-"kube-system"/"coredns-token-wzbvq": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-wzbvq" is forbidden: User "system:node:old-k8s-version-145103" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-145103' and this object
	W0807 19:28:15.021740  655176 logs.go:138] Found kubelet problem: Aug 07 19:22:38 old-k8s-version-145103 kubelet[667]: E0807 19:22:38.940191     667 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-145103" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-145103' and this object
	W0807 19:28:15.032180  655176 logs.go:138] Found kubelet problem: Aug 07 19:22:42 old-k8s-version-145103 kubelet[667]: E0807 19:22:42.347184     667 pod_workers.go:191] Error syncing pod 23d2813a-d0e0-4efb-88b5-a90255e3e770 ("metrics-server-9975d5f86-g5777_kube-system(23d2813a-d0e0-4efb-88b5-a90255e3e770)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0807 19:28:15.032444  655176 logs.go:138] Found kubelet problem: Aug 07 19:22:42 old-k8s-version-145103 kubelet[667]: E0807 19:22:42.918907     667 pod_workers.go:191] Error syncing pod 23d2813a-d0e0-4efb-88b5-a90255e3e770 ("metrics-server-9975d5f86-g5777_kube-system(23d2813a-d0e0-4efb-88b5-a90255e3e770)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0807 19:28:15.035608  655176 logs.go:138] Found kubelet problem: Aug 07 19:22:57 old-k8s-version-145103 kubelet[667]: E0807 19:22:57.517610     667 pod_workers.go:191] Error syncing pod 23d2813a-d0e0-4efb-88b5-a90255e3e770 ("metrics-server-9975d5f86-g5777_kube-system(23d2813a-d0e0-4efb-88b5-a90255e3e770)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0807 19:28:15.036025  655176 logs.go:138] Found kubelet problem: Aug 07 19:23:00 old-k8s-version-145103 kubelet[667]: E0807 19:23:00.001804     667 reflector.go:138] object-"kubernetes-dashboard"/"kubernetes-dashboard-token-qhtqd": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kubernetes-dashboard-token-qhtqd" is forbidden: User "system:node:old-k8s-version-145103" cannot list resource "secrets" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'old-k8s-version-145103' and this object
	W0807 19:28:15.038985  655176 logs.go:138] Found kubelet problem: Aug 07 19:23:08 old-k8s-version-145103 kubelet[667]: E0807 19:23:08.509924     667 pod_workers.go:191] Error syncing pod 23d2813a-d0e0-4efb-88b5-a90255e3e770 ("metrics-server-9975d5f86-g5777_kube-system(23d2813a-d0e0-4efb-88b5-a90255e3e770)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0807 19:28:15.039490  655176 logs.go:138] Found kubelet problem: Aug 07 19:23:09 old-k8s-version-145103 kubelet[667]: E0807 19:23:09.027546     667 pod_workers.go:191] Error syncing pod 33c5829f-ed2d-48df-8d3b-9d4927dc0083 ("dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"
	W0807 19:28:15.039822  655176 logs.go:138] Found kubelet problem: Aug 07 19:23:10 old-k8s-version-145103 kubelet[667]: E0807 19:23:10.033699     667 pod_workers.go:191] Error syncing pod 33c5829f-ed2d-48df-8d3b-9d4927dc0083 ("dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"
	W0807 19:28:15.040296  655176 logs.go:138] Found kubelet problem: Aug 07 19:23:13 old-k8s-version-145103 kubelet[667]: E0807 19:23:13.047356     667 pod_workers.go:191] Error syncing pod e63be88a-9706-4c13-ab97-8b04c5a9e516 ("storage-provisioner_kube-system(e63be88a-9706-4c13-ab97-8b04c5a9e516)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(e63be88a-9706-4c13-ab97-8b04c5a9e516)"
	W0807 19:28:15.041304  655176 logs.go:138] Found kubelet problem: Aug 07 19:23:19 old-k8s-version-145103 kubelet[667]: E0807 19:23:19.085460     667 pod_workers.go:191] Error syncing pod 33c5829f-ed2d-48df-8d3b-9d4927dc0083 ("dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"
	W0807 19:28:15.044416  655176 logs.go:138] Found kubelet problem: Aug 07 19:23:20 old-k8s-version-145103 kubelet[667]: E0807 19:23:20.522246     667 pod_workers.go:191] Error syncing pod 23d2813a-d0e0-4efb-88b5-a90255e3e770 ("metrics-server-9975d5f86-g5777_kube-system(23d2813a-d0e0-4efb-88b5-a90255e3e770)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0807 19:28:15.045673  655176 logs.go:138] Found kubelet problem: Aug 07 19:23:28 old-k8s-version-145103 kubelet[667]: E0807 19:23:28.742763     667 pod_workers.go:191] Error syncing pod 33c5829f-ed2d-48df-8d3b-9d4927dc0083 ("dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"
	W0807 19:28:15.045977  655176 logs.go:138] Found kubelet problem: Aug 07 19:23:31 old-k8s-version-145103 kubelet[667]: E0807 19:23:31.505245     667 pod_workers.go:191] Error syncing pod 23d2813a-d0e0-4efb-88b5-a90255e3e770 ("metrics-server-9975d5f86-g5777_kube-system(23d2813a-d0e0-4efb-88b5-a90255e3e770)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0807 19:28:15.046847  655176 logs.go:138] Found kubelet problem: Aug 07 19:23:43 old-k8s-version-145103 kubelet[667]: E0807 19:23:43.151855     667 pod_workers.go:191] Error syncing pod 33c5829f-ed2d-48df-8d3b-9d4927dc0083 ("dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"
	W0807 19:28:15.047122  655176 logs.go:138] Found kubelet problem: Aug 07 19:23:44 old-k8s-version-145103 kubelet[667]: E0807 19:23:44.505345     667 pod_workers.go:191] Error syncing pod 23d2813a-d0e0-4efb-88b5-a90255e3e770 ("metrics-server-9975d5f86-g5777_kube-system(23d2813a-d0e0-4efb-88b5-a90255e3e770)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0807 19:28:15.047468  655176 logs.go:138] Found kubelet problem: Aug 07 19:23:48 old-k8s-version-145103 kubelet[667]: E0807 19:23:48.742421     667 pod_workers.go:191] Error syncing pod 33c5829f-ed2d-48df-8d3b-9d4927dc0083 ("dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"
	W0807 19:28:15.047706  655176 logs.go:138] Found kubelet problem: Aug 07 19:23:57 old-k8s-version-145103 kubelet[667]: E0807 19:23:57.505213     667 pod_workers.go:191] Error syncing pod 23d2813a-d0e0-4efb-88b5-a90255e3e770 ("metrics-server-9975d5f86-g5777_kube-system(23d2813a-d0e0-4efb-88b5-a90255e3e770)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0807 19:28:15.048407  655176 logs.go:138] Found kubelet problem: Aug 07 19:24:04 old-k8s-version-145103 kubelet[667]: E0807 19:24:04.505088     667 pod_workers.go:191] Error syncing pod 33c5829f-ed2d-48df-8d3b-9d4927dc0083 ("dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"
	W0807 19:28:15.051988  655176 logs.go:138] Found kubelet problem: Aug 07 19:24:12 old-k8s-version-145103 kubelet[667]: E0807 19:24:12.520104     667 pod_workers.go:191] Error syncing pod 23d2813a-d0e0-4efb-88b5-a90255e3e770 ("metrics-server-9975d5f86-g5777_kube-system(23d2813a-d0e0-4efb-88b5-a90255e3e770)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0807 19:28:15.052580  655176 logs.go:138] Found kubelet problem: Aug 07 19:24:19 old-k8s-version-145103 kubelet[667]: E0807 19:24:19.504866     667 pod_workers.go:191] Error syncing pod 33c5829f-ed2d-48df-8d3b-9d4927dc0083 ("dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"
	W0807 19:28:15.052851  655176 logs.go:138] Found kubelet problem: Aug 07 19:24:23 old-k8s-version-145103 kubelet[667]: E0807 19:24:23.505072     667 pod_workers.go:191] Error syncing pod 23d2813a-d0e0-4efb-88b5-a90255e3e770 ("metrics-server-9975d5f86-g5777_kube-system(23d2813a-d0e0-4efb-88b5-a90255e3e770)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0807 19:28:15.053501  655176 logs.go:138] Found kubelet problem: Aug 07 19:24:33 old-k8s-version-145103 kubelet[667]: E0807 19:24:33.304075     667 pod_workers.go:191] Error syncing pod 33c5829f-ed2d-48df-8d3b-9d4927dc0083 ("dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"
	W0807 19:28:15.053701  655176 logs.go:138] Found kubelet problem: Aug 07 19:24:35 old-k8s-version-145103 kubelet[667]: E0807 19:24:35.505146     667 pod_workers.go:191] Error syncing pod 23d2813a-d0e0-4efb-88b5-a90255e3e770 ("metrics-server-9975d5f86-g5777_kube-system(23d2813a-d0e0-4efb-88b5-a90255e3e770)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0807 19:28:15.054163  655176 logs.go:138] Found kubelet problem: Aug 07 19:24:38 old-k8s-version-145103 kubelet[667]: E0807 19:24:38.742889     667 pod_workers.go:191] Error syncing pod 33c5829f-ed2d-48df-8d3b-9d4927dc0083 ("dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"
	W0807 19:28:15.054354  655176 logs.go:138] Found kubelet problem: Aug 07 19:24:47 old-k8s-version-145103 kubelet[667]: E0807 19:24:47.513250     667 pod_workers.go:191] Error syncing pod 23d2813a-d0e0-4efb-88b5-a90255e3e770 ("metrics-server-9975d5f86-g5777_kube-system(23d2813a-d0e0-4efb-88b5-a90255e3e770)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0807 19:28:15.055079  655176 logs.go:138] Found kubelet problem: Aug 07 19:24:51 old-k8s-version-145103 kubelet[667]: E0807 19:24:51.504838     667 pod_workers.go:191] Error syncing pod 33c5829f-ed2d-48df-8d3b-9d4927dc0083 ("dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"
	W0807 19:28:15.055282  655176 logs.go:138] Found kubelet problem: Aug 07 19:25:01 old-k8s-version-145103 kubelet[667]: E0807 19:25:01.505233     667 pod_workers.go:191] Error syncing pod 23d2813a-d0e0-4efb-88b5-a90255e3e770 ("metrics-server-9975d5f86-g5777_kube-system(23d2813a-d0e0-4efb-88b5-a90255e3e770)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0807 19:28:15.055793  655176 logs.go:138] Found kubelet problem: Aug 07 19:25:04 old-k8s-version-145103 kubelet[667]: E0807 19:25:04.504768     667 pod_workers.go:191] Error syncing pod 33c5829f-ed2d-48df-8d3b-9d4927dc0083 ("dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"
	W0807 19:28:15.056104  655176 logs.go:138] Found kubelet problem: Aug 07 19:25:12 old-k8s-version-145103 kubelet[667]: E0807 19:25:12.505672     667 pod_workers.go:191] Error syncing pod 23d2813a-d0e0-4efb-88b5-a90255e3e770 ("metrics-server-9975d5f86-g5777_kube-system(23d2813a-d0e0-4efb-88b5-a90255e3e770)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0807 19:28:15.056515  655176 logs.go:138] Found kubelet problem: Aug 07 19:25:15 old-k8s-version-145103 kubelet[667]: E0807 19:25:15.504848     667 pod_workers.go:191] Error syncing pod 33c5829f-ed2d-48df-8d3b-9d4927dc0083 ("dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"
	W0807 19:28:15.056806  655176 logs.go:138] Found kubelet problem: Aug 07 19:25:24 old-k8s-version-145103 kubelet[667]: E0807 19:25:24.508383     667 pod_workers.go:191] Error syncing pod 23d2813a-d0e0-4efb-88b5-a90255e3e770 ("metrics-server-9975d5f86-g5777_kube-system(23d2813a-d0e0-4efb-88b5-a90255e3e770)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0807 19:28:15.058934  655176 logs.go:138] Found kubelet problem: Aug 07 19:25:26 old-k8s-version-145103 kubelet[667]: E0807 19:25:26.506014     667 pod_workers.go:191] Error syncing pod 33c5829f-ed2d-48df-8d3b-9d4927dc0083 ("dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"
	W0807 19:28:15.062836  655176 logs.go:138] Found kubelet problem: Aug 07 19:25:39 old-k8s-version-145103 kubelet[667]: E0807 19:25:39.513837     667 pod_workers.go:191] Error syncing pod 23d2813a-d0e0-4efb-88b5-a90255e3e770 ("metrics-server-9975d5f86-g5777_kube-system(23d2813a-d0e0-4efb-88b5-a90255e3e770)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0807 19:28:15.063315  655176 logs.go:138] Found kubelet problem: Aug 07 19:25:41 old-k8s-version-145103 kubelet[667]: E0807 19:25:41.504881     667 pod_workers.go:191] Error syncing pod 33c5829f-ed2d-48df-8d3b-9d4927dc0083 ("dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"
	W0807 19:28:15.063940  655176 logs.go:138] Found kubelet problem: Aug 07 19:25:52 old-k8s-version-145103 kubelet[667]: E0807 19:25:52.508931     667 pod_workers.go:191] Error syncing pod 33c5829f-ed2d-48df-8d3b-9d4927dc0083 ("dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"
	W0807 19:28:15.064261  655176 logs.go:138] Found kubelet problem: Aug 07 19:25:53 old-k8s-version-145103 kubelet[667]: E0807 19:25:53.506119     667 pod_workers.go:191] Error syncing pod 23d2813a-d0e0-4efb-88b5-a90255e3e770 ("metrics-server-9975d5f86-g5777_kube-system(23d2813a-d0e0-4efb-88b5-a90255e3e770)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0807 19:28:15.064992  655176 logs.go:138] Found kubelet problem: Aug 07 19:26:04 old-k8s-version-145103 kubelet[667]: E0807 19:26:04.550949     667 pod_workers.go:191] Error syncing pod 33c5829f-ed2d-48df-8d3b-9d4927dc0083 ("dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"
	W0807 19:28:15.065191  655176 logs.go:138] Found kubelet problem: Aug 07 19:26:06 old-k8s-version-145103 kubelet[667]: E0807 19:26:06.505835     667 pod_workers.go:191] Error syncing pod 23d2813a-d0e0-4efb-88b5-a90255e3e770 ("metrics-server-9975d5f86-g5777_kube-system(23d2813a-d0e0-4efb-88b5-a90255e3e770)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0807 19:28:15.065801  655176 logs.go:138] Found kubelet problem: Aug 07 19:26:08 old-k8s-version-145103 kubelet[667]: E0807 19:26:08.743279     667 pod_workers.go:191] Error syncing pod 33c5829f-ed2d-48df-8d3b-9d4927dc0083 ("dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"
	W0807 19:28:15.066029  655176 logs.go:138] Found kubelet problem: Aug 07 19:26:17 old-k8s-version-145103 kubelet[667]: E0807 19:26:17.505160     667 pod_workers.go:191] Error syncing pod 23d2813a-d0e0-4efb-88b5-a90255e3e770 ("metrics-server-9975d5f86-g5777_kube-system(23d2813a-d0e0-4efb-88b5-a90255e3e770)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0807 19:28:15.066372  655176 logs.go:138] Found kubelet problem: Aug 07 19:26:22 old-k8s-version-145103 kubelet[667]: E0807 19:26:22.505291     667 pod_workers.go:191] Error syncing pod 33c5829f-ed2d-48df-8d3b-9d4927dc0083 ("dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"
	W0807 19:28:15.066561  655176 logs.go:138] Found kubelet problem: Aug 07 19:26:28 old-k8s-version-145103 kubelet[667]: E0807 19:26:28.505099     667 pod_workers.go:191] Error syncing pod 23d2813a-d0e0-4efb-88b5-a90255e3e770 ("metrics-server-9975d5f86-g5777_kube-system(23d2813a-d0e0-4efb-88b5-a90255e3e770)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0807 19:28:15.067050  655176 logs.go:138] Found kubelet problem: Aug 07 19:26:36 old-k8s-version-145103 kubelet[667]: E0807 19:26:36.506064     667 pod_workers.go:191] Error syncing pod 33c5829f-ed2d-48df-8d3b-9d4927dc0083 ("dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"
	W0807 19:28:15.067298  655176 logs.go:138] Found kubelet problem: Aug 07 19:26:40 old-k8s-version-145103 kubelet[667]: E0807 19:26:40.509650     667 pod_workers.go:191] Error syncing pod 23d2813a-d0e0-4efb-88b5-a90255e3e770 ("metrics-server-9975d5f86-g5777_kube-system(23d2813a-d0e0-4efb-88b5-a90255e3e770)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0807 19:28:15.067672  655176 logs.go:138] Found kubelet problem: Aug 07 19:26:48 old-k8s-version-145103 kubelet[667]: E0807 19:26:48.505788     667 pod_workers.go:191] Error syncing pod 33c5829f-ed2d-48df-8d3b-9d4927dc0083 ("dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"
	W0807 19:28:15.067861  655176 logs.go:138] Found kubelet problem: Aug 07 19:26:55 old-k8s-version-145103 kubelet[667]: E0807 19:26:55.505139     667 pod_workers.go:191] Error syncing pod 23d2813a-d0e0-4efb-88b5-a90255e3e770 ("metrics-server-9975d5f86-g5777_kube-system(23d2813a-d0e0-4efb-88b5-a90255e3e770)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0807 19:28:15.068519  655176 logs.go:138] Found kubelet problem: Aug 07 19:27:00 old-k8s-version-145103 kubelet[667]: E0807 19:27:00.509785     667 pod_workers.go:191] Error syncing pod 33c5829f-ed2d-48df-8d3b-9d4927dc0083 ("dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"
	W0807 19:28:15.068737  655176 logs.go:138] Found kubelet problem: Aug 07 19:27:10 old-k8s-version-145103 kubelet[667]: E0807 19:27:10.505231     667 pod_workers.go:191] Error syncing pod 23d2813a-d0e0-4efb-88b5-a90255e3e770 ("metrics-server-9975d5f86-g5777_kube-system(23d2813a-d0e0-4efb-88b5-a90255e3e770)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0807 19:28:15.069150  655176 logs.go:138] Found kubelet problem: Aug 07 19:27:15 old-k8s-version-145103 kubelet[667]: E0807 19:27:15.504826     667 pod_workers.go:191] Error syncing pod 33c5829f-ed2d-48df-8d3b-9d4927dc0083 ("dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"
	W0807 19:28:15.069340  655176 logs.go:138] Found kubelet problem: Aug 07 19:27:24 old-k8s-version-145103 kubelet[667]: E0807 19:27:24.505344     667 pod_workers.go:191] Error syncing pod 23d2813a-d0e0-4efb-88b5-a90255e3e770 ("metrics-server-9975d5f86-g5777_kube-system(23d2813a-d0e0-4efb-88b5-a90255e3e770)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0807 19:28:15.069668  655176 logs.go:138] Found kubelet problem: Aug 07 19:27:26 old-k8s-version-145103 kubelet[667]: E0807 19:27:26.511078     667 pod_workers.go:191] Error syncing pod 33c5829f-ed2d-48df-8d3b-9d4927dc0083 ("dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"
	W0807 19:28:15.070028  655176 logs.go:138] Found kubelet problem: Aug 07 19:27:35 old-k8s-version-145103 kubelet[667]: E0807 19:27:35.505097     667 pod_workers.go:191] Error syncing pod 23d2813a-d0e0-4efb-88b5-a90255e3e770 ("metrics-server-9975d5f86-g5777_kube-system(23d2813a-d0e0-4efb-88b5-a90255e3e770)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0807 19:28:15.070428  655176 logs.go:138] Found kubelet problem: Aug 07 19:27:39 old-k8s-version-145103 kubelet[667]: E0807 19:27:39.505134     667 pod_workers.go:191] Error syncing pod 33c5829f-ed2d-48df-8d3b-9d4927dc0083 ("dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"
	W0807 19:28:15.070621  655176 logs.go:138] Found kubelet problem: Aug 07 19:27:47 old-k8s-version-145103 kubelet[667]: E0807 19:27:47.505474     667 pod_workers.go:191] Error syncing pod 23d2813a-d0e0-4efb-88b5-a90255e3e770 ("metrics-server-9975d5f86-g5777_kube-system(23d2813a-d0e0-4efb-88b5-a90255e3e770)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0807 19:28:15.070950  655176 logs.go:138] Found kubelet problem: Aug 07 19:27:54 old-k8s-version-145103 kubelet[667]: E0807 19:27:54.507623     667 pod_workers.go:191] Error syncing pod 33c5829f-ed2d-48df-8d3b-9d4927dc0083 ("dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"
	W0807 19:28:15.071135  655176 logs.go:138] Found kubelet problem: Aug 07 19:28:00 old-k8s-version-145103 kubelet[667]: E0807 19:28:00.505524     667 pod_workers.go:191] Error syncing pod 23d2813a-d0e0-4efb-88b5-a90255e3e770 ("metrics-server-9975d5f86-g5777_kube-system(23d2813a-d0e0-4efb-88b5-a90255e3e770)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0807 19:28:15.071621  655176 logs.go:138] Found kubelet problem: Aug 07 19:28:07 old-k8s-version-145103 kubelet[667]: E0807 19:28:07.504777     667 pod_workers.go:191] Error syncing pod 33c5829f-ed2d-48df-8d3b-9d4927dc0083 ("dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"
	W0807 19:28:15.071865  655176 logs.go:138] Found kubelet problem: Aug 07 19:28:12 old-k8s-version-145103 kubelet[667]: E0807 19:28:12.505296     667 pod_workers.go:191] Error syncing pod 23d2813a-d0e0-4efb-88b5-a90255e3e770 ("metrics-server-9975d5f86-g5777_kube-system(23d2813a-d0e0-4efb-88b5-a90255e3e770)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
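Every ErrImagePull/ImagePullBackOff entry above traces back to the same root cause: fake.domain does not resolve against the node's resolver at 192.168.85.1 (this image reference is deliberately unreachable in the metrics-server addon test). A minimal sketch of confirming that by hand, assuming `minikube ssh` access to this profile and that crictl and nslookup are present in the node image:

	# Reproduce the exact pull containerd attempts; expect the same "no such host" error
	minikube -p old-k8s-version-145103 ssh -- sudo crictl pull fake.domain/registry.k8s.io/echoserver:1.4
	# Check resolution against the nameserver kubelet reports (192.168.85.1)
	minikube -p old-k8s-version-145103 ssh -- nslookup fake.domain 192.168.85.1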
	I0807 19:28:15.071883  655176 logs.go:123] Gathering logs for kubernetes-dashboard [c969c3eda055d4e537185b68a93ca50ae4c4f1bd8623727c4836ae5049aaa92f] ...
	I0807 19:28:15.071930  655176 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c969c3eda055d4e537185b68a93ca50ae4c4f1bd8623727c4836ae5049aaa92f"
	I0807 19:28:15.137599  655176 logs.go:123] Gathering logs for containerd ...
	I0807 19:28:15.137636  655176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0807 19:28:15.226911  655176 logs.go:123] Gathering logs for kindnet [c546073d53684c34a050dfc7fc09a0893a42322936b7996cf13dc99f189689dc] ...
	I0807 19:28:15.226952  655176 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c546073d53684c34a050dfc7fc09a0893a42322936b7996cf13dc99f189689dc"
	I0807 19:28:15.322954  655176 logs.go:123] Gathering logs for coredns [6e6c60154928b7200ebdc9a3e446964a9486060a56e068fc228fc1cf486ea9f2] ...
	I0807 19:28:15.322998  655176 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e6c60154928b7200ebdc9a3e446964a9486060a56e068fc228fc1cf486ea9f2"
	I0807 19:28:15.415047  655176 logs.go:123] Gathering logs for kube-proxy [ef0324338dc1e165f57b48b6697cdfdd38e0a716c5db953f324a34d8b8b07a4a] ...
	I0807 19:28:15.415075  655176 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ef0324338dc1e165f57b48b6697cdfdd38e0a716c5db953f324a34d8b8b07a4a"
	I0807 19:28:15.494761  655176 logs.go:123] Gathering logs for storage-provisioner [c39d2dd3e3af4ce2f603cdcb5ffba311c3e583e21a32a79940c52420d20e73c2] ...
	I0807 19:28:15.494795  655176 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c39d2dd3e3af4ce2f603cdcb5ffba311c3e583e21a32a79940c52420d20e73c2"
	I0807 19:28:15.554000  655176 logs.go:123] Gathering logs for container status ...
	I0807 19:28:15.554027  655176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0807 19:28:15.626977  655176 logs.go:123] Gathering logs for kube-apiserver [b297faa73cadffc51a549e10c40129c612c36462461c40cbdb9bc641d6ee9a07] ...
	I0807 19:28:15.627011  655176 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b297faa73cadffc51a549e10c40129c612c36462461c40cbdb9bc641d6ee9a07"
	I0807 19:28:15.737052  655176 logs.go:123] Gathering logs for kube-controller-manager [3e7229b91d01277de26c3bdbed648db1659a47ae7a01f17a25604535059e3b69] ...
	I0807 19:28:15.737136  655176 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e7229b91d01277de26c3bdbed648db1659a47ae7a01f17a25604535059e3b69"
	I0807 19:28:15.817365  655176 logs.go:123] Gathering logs for storage-provisioner [1f71dfd46d47d861349904b991e501ae81c87e5f99aa08ea12edaf13977fd3ef] ...
	I0807 19:28:15.817414  655176 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1f71dfd46d47d861349904b991e501ae81c87e5f99aa08ea12edaf13977fd3ef"
	I0807 19:28:15.875323  655176 logs.go:123] Gathering logs for etcd [3eedb6e8f6840ac29e27007777d3751b9d3c3115d81b0c7922ea53ca5bdf40b0] ...
	I0807 19:28:15.875353  655176 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3eedb6e8f6840ac29e27007777d3751b9d3c3115d81b0c7922ea53ca5bdf40b0"
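Each "Gathering logs for" step above is a `crictl logs --tail 400` against a container ID discovered earlier. A sketch of the same sweep done manually on the node, assuming root access and containerd's default endpoint:

	# Tail the last lines of every container crictl knows about, running or exited
	for id in $(sudo crictl ps -aq); do
	  echo "== container $id =="
	  sudo crictl logs --tail 400 "$id" 2>&1 | tail -n 40
	done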
	I0807 19:28:16.011236  655176 out.go:304] Setting ErrFile to fd 2...
	I0807 19:28:16.011272  655176 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0807 19:28:16.011328  655176 out.go:239] X Problems detected in kubelet:
	W0807 19:28:16.011342  655176 out.go:239]   Aug 07 19:27:47 old-k8s-version-145103 kubelet[667]: E0807 19:27:47.505474     667 pod_workers.go:191] Error syncing pod 23d2813a-d0e0-4efb-88b5-a90255e3e770 ("metrics-server-9975d5f86-g5777_kube-system(23d2813a-d0e0-4efb-88b5-a90255e3e770)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0807 19:28:16.011351  655176 out.go:239]   Aug 07 19:27:54 old-k8s-version-145103 kubelet[667]: E0807 19:27:54.507623     667 pod_workers.go:191] Error syncing pod 33c5829f-ed2d-48df-8d3b-9d4927dc0083 ("dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"
	W0807 19:28:16.011362  655176 out.go:239]   Aug 07 19:28:00 old-k8s-version-145103 kubelet[667]: E0807 19:28:00.505524     667 pod_workers.go:191] Error syncing pod 23d2813a-d0e0-4efb-88b5-a90255e3e770 ("metrics-server-9975d5f86-g5777_kube-system(23d2813a-d0e0-4efb-88b5-a90255e3e770)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0807 19:28:16.011374  655176 out.go:239]   Aug 07 19:28:07 old-k8s-version-145103 kubelet[667]: E0807 19:28:07.504777     667 pod_workers.go:191] Error syncing pod 33c5829f-ed2d-48df-8d3b-9d4927dc0083 ("dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"
	W0807 19:28:16.011386  655176 out.go:239]   Aug 07 19:28:12 old-k8s-version-145103 kubelet[667]: E0807 19:28:12.505296     667 pod_workers.go:191] Error syncing pod 23d2813a-d0e0-4efb-88b5-a90255e3e770 ("metrics-server-9975d5f86-g5777_kube-system(23d2813a-d0e0-4efb-88b5-a90255e3e770)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0807 19:28:16.011394  655176 out.go:304] Setting ErrFile to fd 2...
	I0807 19:28:16.011400  655176 out.go:338] TERM=,COLORTERM=, which probably does not support color
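The "Problems detected in kubelet" block is minikube re-surfacing the newest entries from the scan above; the same lines can be pulled straight from the node's journal. A sketch, assuming the kubelet runs under systemd inside the node (as it does in kicbase):

	# Show recent kubelet pod sync errors without minikube's log collector
	minikube -p old-k8s-version-145103 ssh -- "sudo journalctl -u kubelet --no-pager -n 400 | grep pod_workers | tail -n 5"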
	I0807 19:28:21.292948  666607 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19389-443116/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-313116:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 -I lz4 -xf /preloaded.tar -C /extractDir: (5.406726875s)
	I0807 19:28:21.292979  666607 kic.go:203] duration metric: took 5.406910937s to extract preloaded images to volume ...
	W0807 19:28:21.293115  666607 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0807 19:28:21.293225  666607 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0807 19:28:21.355666  666607 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-313116 --name embed-certs-313116 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-313116 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-313116 --network embed-certs-313116 --ip 192.168.76.2 --volume embed-certs-313116:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0
	I0807 19:28:21.710485  666607 cli_runner.go:164] Run: docker container inspect embed-certs-313116 --format={{.State.Running}}
	I0807 19:28:21.736323  666607 cli_runner.go:164] Run: docker container inspect embed-certs-313116 --format={{.State.Status}}
	I0807 19:28:21.761022  666607 cli_runner.go:164] Run: docker exec embed-certs-313116 stat /var/lib/dpkg/alternatives/iptables
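The `docker run` above publishes the node's SSH (22), API server (8443), and Docker (2376) ports to ephemeral localhost ports; minikube later recovers them with the same inspect template seen a few lines below. A sketch of reading the mapping directly while the embed-certs-313116 container is running:

	# Print the host port bound to the node's SSH port
	docker port embed-certs-313116 22/tcp
	# Equivalent inspect template, as minikube itself uses
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' embed-certs-313116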
	I0807 19:28:21.813759  666607 oci.go:144] the created container "embed-certs-313116" has a running status.
	I0807 19:28:21.813800  666607 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19389-443116/.minikube/machines/embed-certs-313116/id_rsa...
	I0807 19:28:22.324565  666607 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19389-443116/.minikube/machines/embed-certs-313116/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0807 19:28:22.362033  666607 cli_runner.go:164] Run: docker container inspect embed-certs-313116 --format={{.State.Status}}
	I0807 19:28:22.391629  666607 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0807 19:28:22.391659  666607 kic_runner.go:114] Args: [docker exec --privileged embed-certs-313116 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0807 19:28:22.464596  666607 cli_runner.go:164] Run: docker container inspect embed-certs-313116 --format={{.State.Status}}
	I0807 19:28:22.492527  666607 machine.go:94] provisionDockerMachine start ...
	I0807 19:28:22.492619  666607 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-313116
	I0807 19:28:22.519689  666607 main.go:141] libmachine: Using SSH client type: native
	I0807 19:28:22.519976  666607 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil>  [] 0s} 127.0.0.1 33468 <nil> <nil>}
	I0807 19:28:22.519984  666607 main.go:141] libmachine: About to run SSH command:
	hostname
	I0807 19:28:22.695961  666607 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-313116
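libmachine is driving a plain SSH session here: the key from the profile's machines directory, user docker, host 127.0.0.1, and the ephemeral port 33468 recovered above. A sketch of opening the same session by hand (the port changes on every run, so substitute the one your own log reports):

	# Connect exactly as libmachine does; the node's hostname should echo back
	ssh -i /home/jenkins/minikube-integration/19389-443116/.minikube/machines/embed-certs-313116/id_rsa \
	    -p 33468 docker@127.0.0.1 hostname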
	
	I0807 19:28:22.696025  666607 ubuntu.go:169] provisioning hostname "embed-certs-313116"
	I0807 19:28:22.696129  666607 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-313116
	I0807 19:28:22.716278  666607 main.go:141] libmachine: Using SSH client type: native
	I0807 19:28:22.716537  666607 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil>  [] 0s} 127.0.0.1 33468 <nil> <nil>}
	I0807 19:28:22.716549  666607 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-313116 && echo "embed-certs-313116" | sudo tee /etc/hostname
	I0807 19:28:22.879129  666607 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-313116
	
	I0807 19:28:22.879215  666607 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-313116
	I0807 19:28:22.901490  666607 main.go:141] libmachine: Using SSH client type: native
	I0807 19:28:22.901740  666607 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil>  [] 0s} 127.0.0.1 33468 <nil> <nil>}
	I0807 19:28:22.901766  666607 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-313116' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-313116/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-313116' | sudo tee -a /etc/hosts; 
				fi
			fi
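The script above rewrites the existing 127.0.1.1 entry instead of appending a duplicate, which keeps repeated provisioning idempotent. A quick check of the result, run inside the node:

	# The node should now map 127.0.1.1 to its minikube profile name
	grep -n '^127.0.1.1' /etc/hosts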
	I0807 19:28:23.058105  666607 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0807 19:28:23.058131  666607 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19389-443116/.minikube CaCertPath:/home/jenkins/minikube-integration/19389-443116/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19389-443116/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19389-443116/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19389-443116/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19389-443116/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19389-443116/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19389-443116/.minikube}
	I0807 19:28:23.058169  666607 ubuntu.go:177] setting up certificates
	I0807 19:28:23.058185  666607 provision.go:84] configureAuth start
	I0807 19:28:23.058255  666607 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-313116
	I0807 19:28:23.080433  666607 provision.go:143] copyHostCerts
	I0807 19:28:23.080500  666607 exec_runner.go:144] found /home/jenkins/minikube-integration/19389-443116/.minikube/cert.pem, removing ...
	I0807 19:28:23.080509  666607 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19389-443116/.minikube/cert.pem
	I0807 19:28:23.080589  666607 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19389-443116/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19389-443116/.minikube/cert.pem (1123 bytes)
	I0807 19:28:23.080682  666607 exec_runner.go:144] found /home/jenkins/minikube-integration/19389-443116/.minikube/key.pem, removing ...
	I0807 19:28:23.080699  666607 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19389-443116/.minikube/key.pem
	I0807 19:28:23.080731  666607 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19389-443116/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19389-443116/.minikube/key.pem (1675 bytes)
	I0807 19:28:23.080795  666607 exec_runner.go:144] found /home/jenkins/minikube-integration/19389-443116/.minikube/ca.pem, removing ...
	I0807 19:28:23.080806  666607 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19389-443116/.minikube/ca.pem
	I0807 19:28:23.080832  666607 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19389-443116/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19389-443116/.minikube/ca.pem (1082 bytes)
	I0807 19:28:23.080883  666607 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19389-443116/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19389-443116/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19389-443116/.minikube/certs/ca-key.pem org=jenkins.embed-certs-313116 san=[127.0.0.1 192.168.76.2 embed-certs-313116 localhost minikube]
	I0807 19:28:23.446720  666607 provision.go:177] copyRemoteCerts
	I0807 19:28:23.446802  666607 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0807 19:28:23.446847  666607 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-313116
	I0807 19:28:23.463722  666607 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/19389-443116/.minikube/machines/embed-certs-313116/id_rsa Username:docker}
	I0807 19:28:23.569435  666607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-443116/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0807 19:28:23.596749  666607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-443116/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0807 19:28:23.623007  666607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-443116/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
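configureAuth generated a server certificate whose SANs cover every name a client might dial (127.0.0.1, the container IP 192.168.76.2, the hostname, localhost, minikube), and the scp steps above put it in place. A sketch of verifying the SAN list on the node, assuming openssl is installed in the node image:

	# Confirm the SANs embedded in the freshly provisioned server cert
	sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'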
	I0807 19:28:23.648096  666607 provision.go:87] duration metric: took 589.891481ms to configureAuth
	I0807 19:28:23.648121  666607 ubuntu.go:193] setting minikube options for container-runtime
	I0807 19:28:23.648295  666607 config.go:182] Loaded profile config "embed-certs-313116": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.3
	I0807 19:28:23.648301  666607 machine.go:97] duration metric: took 1.155757982s to provisionDockerMachine
	I0807 19:28:23.648308  666607 client.go:171] duration metric: took 8.781860536s to LocalClient.Create
	I0807 19:28:23.648321  666607 start.go:167] duration metric: took 8.781927282s to libmachine.API.Create "embed-certs-313116"
	I0807 19:28:23.648328  666607 start.go:293] postStartSetup for "embed-certs-313116" (driver="docker")
	I0807 19:28:23.648337  666607 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0807 19:28:23.648422  666607 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0807 19:28:23.648461  666607 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-313116
	I0807 19:28:23.664967  666607 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/19389-443116/.minikube/machines/embed-certs-313116/id_rsa Username:docker}
	I0807 19:28:23.765553  666607 ssh_runner.go:195] Run: cat /etc/os-release
	I0807 19:28:23.768609  666607 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0807 19:28:23.768644  666607 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0807 19:28:23.768656  666607 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0807 19:28:23.768663  666607 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0807 19:28:23.768673  666607 filesync.go:126] Scanning /home/jenkins/minikube-integration/19389-443116/.minikube/addons for local assets ...
	I0807 19:28:23.768735  666607 filesync.go:126] Scanning /home/jenkins/minikube-integration/19389-443116/.minikube/files for local assets ...
	I0807 19:28:23.768825  666607 filesync.go:149] local asset: /home/jenkins/minikube-integration/19389-443116/.minikube/files/etc/ssl/certs/4484882.pem -> 4484882.pem in /etc/ssl/certs
	I0807 19:28:23.768954  666607 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0807 19:28:23.777584  666607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-443116/.minikube/files/etc/ssl/certs/4484882.pem --> /etc/ssl/certs/4484882.pem (1708 bytes)
	I0807 19:28:23.804328  666607 start.go:296] duration metric: took 155.983561ms for postStartSetup
	I0807 19:28:23.804809  666607 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-313116
	I0807 19:28:23.821319  666607 profile.go:143] Saving config to /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/embed-certs-313116/config.json ...
	I0807 19:28:23.821616  666607 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0807 19:28:23.821677  666607 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-313116
	I0807 19:28:23.839149  666607 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/19389-443116/.minikube/machines/embed-certs-313116/id_rsa Username:docker}
	I0807 19:28:23.933537  666607 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0807 19:28:23.937842  666607 start.go:128] duration metric: took 9.074030754s to createHost
	I0807 19:28:23.937868  666607 start.go:83] releasing machines lock for "embed-certs-313116", held for 9.074184047s
	I0807 19:28:23.937941  666607 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-313116
	I0807 19:28:23.955860  666607 ssh_runner.go:195] Run: cat /version.json
	I0807 19:28:23.955934  666607 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-313116
	I0807 19:28:23.956174  666607 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0807 19:28:23.956236  666607 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-313116
	I0807 19:28:23.977490  666607 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/19389-443116/.minikube/machines/embed-certs-313116/id_rsa Username:docker}
	I0807 19:28:23.995101  666607 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/19389-443116/.minikube/machines/embed-certs-313116/id_rsa Username:docker}
	I0807 19:28:24.076222  666607 ssh_runner.go:195] Run: systemctl --version
	I0807 19:28:24.208078  666607 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0807 19:28:24.212751  666607 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0807 19:28:24.243991  666607 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0807 19:28:24.244075  666607 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0807 19:28:24.275873  666607 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
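minikube patches the loopback CNI config in place and renames any bridge/podman configs to *.mk_disabled so only its chosen CNI stays active. A sketch of auditing the result inside the node:

	# Active configs keep their names; disabled ones carry the .mk_disabled suffix
	ls -l /etc/cni/net.d/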
	I0807 19:28:24.275900  666607 start.go:495] detecting cgroup driver to use...
	I0807 19:28:24.275933  666607 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0807 19:28:24.275986  666607 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0807 19:28:24.289284  666607 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0807 19:28:24.301788  666607 docker.go:217] disabling cri-docker service (if available) ...
	I0807 19:28:24.301870  666607 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0807 19:28:24.316502  666607 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0807 19:28:24.334478  666607 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0807 19:28:24.435224  666607 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0807 19:28:24.542708  666607 docker.go:233] disabling docker service ...
	I0807 19:28:24.542786  666607 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0807 19:28:24.567514  666607 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0807 19:28:24.582904  666607 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0807 19:28:24.680771  666607 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0807 19:28:24.776561  666607 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0807 19:28:24.788870  666607 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0807 19:28:24.806987  666607 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0807 19:28:24.817418  666607 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0807 19:28:24.827553  666607 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0807 19:28:24.827622  666607 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0807 19:28:24.839767  666607 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0807 19:28:24.852558  666607 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0807 19:28:24.862548  666607 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0807 19:28:24.873790  666607 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0807 19:28:24.885117  666607 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0807 19:28:24.895374  666607 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0807 19:28:24.905346  666607 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0807 19:28:24.915597  666607 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0807 19:28:24.924243  666607 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0807 19:28:24.932938  666607 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 19:28:25.030401  666607 ssh_runner.go:195] Run: sudo systemctl restart containerd
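The run of sed edits above rewrites /etc/containerd/config.toml in place — cgroupfs instead of systemd cgroups (SystemdCgroup = false), the pause:3.9 sandbox image, the runc v2 shim, and the standard CNI conf dir — and the restart picks the file up. A sketch of spot-checking the rewritten values inside the node:

	# Verify the settings the sed pipeline just enforced
	grep -nE 'SystemdCgroup|sandbox_image|conf_dir' /etc/containerd/config.toml
	systemctl is-active containerd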
	I0807 19:28:25.183145  666607 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0807 19:28:25.183272  666607 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0807 19:28:25.187292  666607 start.go:563] Will wait 60s for crictl version
	I0807 19:28:25.187403  666607 ssh_runner.go:195] Run: which crictl
	I0807 19:28:25.191077  666607 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0807 19:28:25.230106  666607 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.19
	RuntimeApiVersion:  v1
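The crictl version probe above reads the /etc/crictl.yaml written earlier, which pins the runtime endpoint to containerd's socket. The same probe can be made explicit, assuming the default socket path from this run:

	# Bypass /etc/crictl.yaml and name the endpoint directly
	sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock version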
	I0807 19:28:25.230201  666607 ssh_runner.go:195] Run: containerd --version
	I0807 19:28:25.255247  666607 ssh_runner.go:195] Run: containerd --version
	I0807 19:28:25.284446  666607 out.go:177] * Preparing Kubernetes v1.30.3 on containerd 1.7.19 ...
	I0807 19:28:26.013214  655176 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0807 19:28:26.025990  655176 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
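The healthz probe succeeds at the HTTP level — the control plane answers, it just never reports the v1.20.0 version the test waits for. A sketch of the same probe by hand, assuming the old-k8s-version-145103 kubectl context used earlier in this report:

	# Raw healthz, as minikube polls it (-k because the API serves its own CA)
	curl -sk https://192.168.85.2:8443/healthz
	# Or through the authenticated client
	kubectl --context old-k8s-version-145103 get --raw /healthz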
	I0807 19:28:26.028726  655176 out.go:177] 
	W0807 19:28:26.030644  655176 out.go:239] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W0807 19:28:26.030678  655176 out.go:239] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	W0807 19:28:26.030700  655176 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	W0807 19:28:26.030706  655176 out.go:239] * 
	W0807 19:28:26.031891  655176 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0807 19:28:26.034065  655176 out.go:177] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	a0d3b188042a6       523cad1a4df73       2 minutes ago       Exited              dashboard-metrics-scraper   5                   9b89f727509e8       dashboard-metrics-scraper-8d5bb5db8-mx57w
	1f71dfd46d47d       ba04bb24b9575       5 minutes ago       Running             storage-provisioner         2                   d96170b80d335       storage-provisioner
	c969c3eda055d       20b332c9a70d8       5 minutes ago       Running             kubernetes-dashboard        0                   45744a6302de2       kubernetes-dashboard-cd95d586-6swd7
	71b67ec384ee3       db91994f4ee8f       5 minutes ago       Running             coredns                     1                   16016e9fc27c1       coredns-74ff55c5b-pnlz9
	bda5b4bb6267e       1611cd07b61d5       5 minutes ago       Running             busybox                     1                   6d17037df5ec6       busybox
	c546073d53684       d5e283bc63d43       5 minutes ago       Running             kindnet-cni                 1                   e738ce3adb84c       kindnet-7lhnq
	c39d2dd3e3af4       ba04bb24b9575       5 minutes ago       Exited              storage-provisioner         1                   d96170b80d335       storage-provisioner
	ef0324338dc1e       25a5233254979       5 minutes ago       Running             kube-proxy                  1                   8eece693fcb5c       kube-proxy-nk57r
	6bf62d6889814       e7605f88f17d6       6 minutes ago       Running             kube-scheduler              1                   d3b06eafa0e6e       kube-scheduler-old-k8s-version-145103
	3eedb6e8f6840       05b738aa1bc63       6 minutes ago       Running             etcd                        1                   18d41b22d82fa       etcd-old-k8s-version-145103
	feec356a13b99       1df8a2b116bd1       6 minutes ago       Running             kube-controller-manager     1                   80f2433114d30       kube-controller-manager-old-k8s-version-145103
	3bc865fe065ed       2c08bbbc02d3a       6 minutes ago       Running             kube-apiserver              1                   19e616d6e9e98       kube-apiserver-old-k8s-version-145103
	04110a0a3e934       1611cd07b61d5       6 minutes ago       Exited              busybox                     0                   8584d7acf4b2d       busybox
	6e6c60154928b       db91994f4ee8f       7 minutes ago       Exited              coredns                     0                   5721245e99bb1       coredns-74ff55c5b-pnlz9
	1d7137ab81f75       d5e283bc63d43       7 minutes ago       Exited              kindnet-cni                 0                   3b0213bb057ee       kindnet-7lhnq
	6fa7a0f941cdf       25a5233254979       8 minutes ago       Exited              kube-proxy                  0                   8d780c5fb717a       kube-proxy-nk57r
	b297faa73cadf       2c08bbbc02d3a       8 minutes ago       Exited              kube-apiserver              0                   2c909693ccf3e       kube-apiserver-old-k8s-version-145103
	3e7229b91d012       1df8a2b116bd1       8 minutes ago       Exited              kube-controller-manager     0                   89803b695a3da       kube-controller-manager-old-k8s-version-145103
	9528fcb65a1d0       e7605f88f17d6       8 minutes ago       Exited              kube-scheduler              0                   b1dbce72ab8f0       kube-scheduler-old-k8s-version-145103
	9cc157f961887       05b738aa1bc63       8 minutes ago       Exited              etcd                        0                   c6b41ba7d8c24       etcd-old-k8s-version-145103
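The dashboard-metrics-scraper container above is on its fifth exited attempt, the usual CrashLoopBackOff signature. A minimal sketch for pulling its logs and exit metadata straight from the runtime, assuming crictl on the node is pointed at the containerd socket recorded in the node annotations below (crictl normally accepts the truncated IDs shown in this table):

    # logs and exit status of the most recent failed attempt
    sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs a0d3b188042a6
    sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock inspect a0d3b188042a6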
	
	
	==> containerd <==
	Aug 07 19:24:32 old-k8s-version-145103 containerd[574]: time="2024-08-07T19:24:32.544795845Z" level=info msg="StartContainer for \"8775ff69c431613c71fc5ebcfcc016648cc1f43a2909f969823c88b03dd8e871\""
	Aug 07 19:24:32 old-k8s-version-145103 containerd[574]: time="2024-08-07T19:24:32.611603486Z" level=info msg="StartContainer for \"8775ff69c431613c71fc5ebcfcc016648cc1f43a2909f969823c88b03dd8e871\" returns successfully"
	Aug 07 19:24:32 old-k8s-version-145103 containerd[574]: time="2024-08-07T19:24:32.641460151Z" level=info msg="shim disconnected" id=8775ff69c431613c71fc5ebcfcc016648cc1f43a2909f969823c88b03dd8e871 namespace=k8s.io
	Aug 07 19:24:32 old-k8s-version-145103 containerd[574]: time="2024-08-07T19:24:32.641522697Z" level=warning msg="cleaning up after shim disconnected" id=8775ff69c431613c71fc5ebcfcc016648cc1f43a2909f969823c88b03dd8e871 namespace=k8s.io
	Aug 07 19:24:32 old-k8s-version-145103 containerd[574]: time="2024-08-07T19:24:32.641534143Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Aug 07 19:24:33 old-k8s-version-145103 containerd[574]: time="2024-08-07T19:24:33.305747272Z" level=info msg="RemoveContainer for \"3f36f28ca8af1c5092433b8a35bd4d5a2ac0e3d4c1ffaa7e9c60d2024b088bb5\""
	Aug 07 19:24:33 old-k8s-version-145103 containerd[574]: time="2024-08-07T19:24:33.311863460Z" level=info msg="RemoveContainer for \"3f36f28ca8af1c5092433b8a35bd4d5a2ac0e3d4c1ffaa7e9c60d2024b088bb5\" returns successfully"
	Aug 07 19:25:39 old-k8s-version-145103 containerd[574]: time="2024-08-07T19:25:39.505529799Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 07 19:25:39 old-k8s-version-145103 containerd[574]: time="2024-08-07T19:25:39.511554222Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host" host=fake.domain
	Aug 07 19:25:39 old-k8s-version-145103 containerd[574]: time="2024-08-07T19:25:39.513305441Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	Aug 07 19:25:39 old-k8s-version-145103 containerd[574]: time="2024-08-07T19:25:39.513373173Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Aug 07 19:26:03 old-k8s-version-145103 containerd[574]: time="2024-08-07T19:26:03.507422146Z" level=info msg="CreateContainer within sandbox \"9b89f727509e86e52ca76b62d0523865cc9bad81c106cb780fd15cdd64ffe85e\" for container name:\"dashboard-metrics-scraper\" attempt:5"
	Aug 07 19:26:03 old-k8s-version-145103 containerd[574]: time="2024-08-07T19:26:03.522837243Z" level=info msg="CreateContainer within sandbox \"9b89f727509e86e52ca76b62d0523865cc9bad81c106cb780fd15cdd64ffe85e\" for name:\"dashboard-metrics-scraper\" attempt:5 returns container id \"a0d3b188042a6406214358c9c2548c6939137c1e58de02480b418a532161210d\""
	Aug 07 19:26:03 old-k8s-version-145103 containerd[574]: time="2024-08-07T19:26:03.523514349Z" level=info msg="StartContainer for \"a0d3b188042a6406214358c9c2548c6939137c1e58de02480b418a532161210d\""
	Aug 07 19:26:03 old-k8s-version-145103 containerd[574]: time="2024-08-07T19:26:03.604306412Z" level=info msg="StartContainer for \"a0d3b188042a6406214358c9c2548c6939137c1e58de02480b418a532161210d\" returns successfully"
	Aug 07 19:26:03 old-k8s-version-145103 containerd[574]: time="2024-08-07T19:26:03.638206543Z" level=info msg="shim disconnected" id=a0d3b188042a6406214358c9c2548c6939137c1e58de02480b418a532161210d namespace=k8s.io
	Aug 07 19:26:03 old-k8s-version-145103 containerd[574]: time="2024-08-07T19:26:03.638264979Z" level=warning msg="cleaning up after shim disconnected" id=a0d3b188042a6406214358c9c2548c6939137c1e58de02480b418a532161210d namespace=k8s.io
	Aug 07 19:26:03 old-k8s-version-145103 containerd[574]: time="2024-08-07T19:26:03.638275080Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Aug 07 19:26:03 old-k8s-version-145103 containerd[574]: time="2024-08-07T19:26:03.650618844Z" level=warning msg="cleanup warnings time=\"2024-08-07T19:26:03Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
	Aug 07 19:26:04 old-k8s-version-145103 containerd[574]: time="2024-08-07T19:26:04.561489435Z" level=info msg="RemoveContainer for \"8775ff69c431613c71fc5ebcfcc016648cc1f43a2909f969823c88b03dd8e871\""
	Aug 07 19:26:04 old-k8s-version-145103 containerd[574]: time="2024-08-07T19:26:04.567297961Z" level=info msg="RemoveContainer for \"8775ff69c431613c71fc5ebcfcc016648cc1f43a2909f969823c88b03dd8e871\" returns successfully"
	Aug 07 19:28:24 old-k8s-version-145103 containerd[574]: time="2024-08-07T19:28:24.505530027Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 07 19:28:24 old-k8s-version-145103 containerd[574]: time="2024-08-07T19:28:24.516589875Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host" host=fake.domain
	Aug 07 19:28:24 old-k8s-version-145103 containerd[574]: time="2024-08-07T19:28:24.518245235Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	Aug 07 19:28:24 old-k8s-version-145103 containerd[574]: time="2024-08-07T19:28:24.518361934Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
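The PullImage failures above are plain DNS misses: fake.domain does not resolve against the node's resolver (192.168.85.1), which this suite appears to rely on for the metrics-server image. A sketch for reproducing the lookup from inside the node container, assuming nslookup and curl are available in the kicbase image:

    # the same resolution containerd attempts before its manifest HEAD request
    docker exec old-k8s-version-145103 nslookup fake.domain
    docker exec old-k8s-version-145103 curl -sI https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4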
	
	
	==> coredns [6e6c60154928b7200ebdc9a3e446964a9486060a56e068fc228fc1cf486ea9f2] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = 093a0bf1423dd8c4eee62372bb216168
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:39582 - 9120 "HINFO IN 440126047223048333.4191671708558080064. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.013754729s
	
	
	==> coredns [71b67ec384ee3491a9aedfb1907bf632c692e24d5bd42dc00d79abb448997faa] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = 093a0bf1423dd8c4eee62372bb216168
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:46606 - 32180 "HINFO IN 4369302588702343646.4731416098990798416. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012252746s
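Both CoreDNS generations load the same config (identical MD5) and answer their HINFO self-probe with NXDOMAIN, which is normal startup behavior rather than an error. A quick end-to-end check of cluster DNS, assuming kubectl access to this profile:

    # one-shot busybox pod; nslookup should return the kubernetes service IP
    kubectl --context old-k8s-version-145103 run dnscheck --rm -it --restart=Never \
      --image=busybox:1.28 -- nslookup kubernetes.default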
	
	
	==> describe nodes <==
	Name:               old-k8s-version-145103
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-145103
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e
	                    minikube.k8s.io/name=old-k8s-version-145103
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_07T19_20_11_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 07 Aug 2024 19:20:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-145103
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 07 Aug 2024 19:28:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 07 Aug 2024 19:23:30 +0000   Wed, 07 Aug 2024 19:20:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 07 Aug 2024 19:23:30 +0000   Wed, 07 Aug 2024 19:20:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 07 Aug 2024 19:23:30 +0000   Wed, 07 Aug 2024 19:20:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 07 Aug 2024 19:23:30 +0000   Wed, 07 Aug 2024 19:20:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-145103
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022364Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022364Ki
	  pods:               110
	System Info:
	  Machine ID:                 31aefeb003c44817a420cf7bae1d3f3c
	  System UUID:                4483faed-3256-4b02-92c4-895442b3cd18
	  Boot ID:                    1ae5b520-001f-49c1-b434-c6991d6f5702
	  Kernel Version:             5.15.0-1066-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.19
	  Kubelet Version:            v1.20.0
	  Kube-Proxy Version:         v1.20.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m43s
	  kube-system                 coredns-74ff55c5b-pnlz9                           100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     8m2s
	  kube-system                 etcd-old-k8s-version-145103                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         8m9s
	  kube-system                 kindnet-7lhnq                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      8m2s
	  kube-system                 kube-apiserver-old-k8s-version-145103             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m9s
	  kube-system                 kube-controller-manager-old-k8s-version-145103    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m9s
	  kube-system                 kube-proxy-nk57r                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m2s
	  kube-system                 kube-scheduler-old-k8s-version-145103             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m9s
	  kube-system                 metrics-server-9975d5f86-g5777                    100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         6m31s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m1s
	  kubernetes-dashboard        dashboard-metrics-scraper-8d5bb5db8-mx57w         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m29s
	  kubernetes-dashboard        kubernetes-dashboard-cd95d586-6swd7               0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)   100m (5%)
	  memory             420Mi (5%)   220Mi (2%)
	  ephemeral-storage  100Mi (0%)   0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  8m29s (x5 over 8m29s)  kubelet     Node old-k8s-version-145103 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m29s (x5 over 8m29s)  kubelet     Node old-k8s-version-145103 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m29s (x4 over 8m29s)  kubelet     Node old-k8s-version-145103 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m9s                   kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m9s                   kubelet     Node old-k8s-version-145103 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m9s                   kubelet     Node old-k8s-version-145103 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m9s                   kubelet     Node old-k8s-version-145103 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             8m9s                   kubelet     Node old-k8s-version-145103 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  8m9s                   kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                8m2s                   kubelet     Node old-k8s-version-145103 status is now: NodeReady
	  Normal  Starting                 8m1s                   kube-proxy  Starting kube-proxy.
	  Normal  Starting                 6m2s                   kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m2s (x8 over 6m2s)    kubelet     Node old-k8s-version-145103 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m2s (x8 over 6m2s)    kubelet     Node old-k8s-version-145103 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m2s (x7 over 6m2s)    kubelet     Node old-k8s-version-145103 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m2s                   kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 5m46s                  kube-proxy  Starting kube-proxy.
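The percentages in the resource tables above are relative to the 2-CPU / 8022364Ki allocatable figures, e.g. 950m requested of 2000m allocatable is about 47% CPU. To re-derive them live, assuming the kubectl context is named after the profile as usual for minikube:

    kubectl --context old-k8s-version-145103 describe node old-k8s-version-145103 \
      | grep -A 10 'Allocated resources'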
	
	
	==> dmesg <==
	[  +0.001023] FS-Cache: O-key=[8] '126fed0000000000'
	[  +0.000723] FS-Cache: N-cookie c=0000001e [p=00000015 fl=2 nc=0 na=1]
	[  +0.000903] FS-Cache: N-cookie d=00000000821c87cc{9p.inode} n=000000000cb2d52f
	[  +0.000995] FS-Cache: N-key=[8] '126fed0000000000'
	[  +0.002883] FS-Cache: Duplicate cookie detected
	[  +0.000673] FS-Cache: O-cookie c=00000018 [p=00000015 fl=226 nc=0 na=1]
	[  +0.000910] FS-Cache: O-cookie d=00000000821c87cc{9p.inode} n=000000009d8f7e93
	[  +0.001011] FS-Cache: O-key=[8] '126fed0000000000'
	[  +0.000739] FS-Cache: N-cookie c=0000001f [p=00000015 fl=2 nc=0 na=1]
	[  +0.000898] FS-Cache: N-cookie d=00000000821c87cc{9p.inode} n=00000000db973e6d
	[  +0.001087] FS-Cache: N-key=[8] '126fed0000000000'
	[  +2.707133] FS-Cache: Duplicate cookie detected
	[  +0.000662] FS-Cache: O-cookie c=00000016 [p=00000015 fl=226 nc=0 na=1]
	[  +0.000948] FS-Cache: O-cookie d=00000000821c87cc{9p.inode} n=0000000011c4343a
	[  +0.001044] FS-Cache: O-key=[8] '116fed0000000000'
	[  +0.000724] FS-Cache: N-cookie c=00000021 [p=00000015 fl=2 nc=0 na=1]
	[  +0.000879] FS-Cache: N-cookie d=00000000821c87cc{9p.inode} n=000000005b80015d
	[  +0.000983] FS-Cache: N-key=[8] '116fed0000000000'
	[  +0.350590] FS-Cache: Duplicate cookie detected
	[  +0.000683] FS-Cache: O-cookie c=0000001b [p=00000015 fl=226 nc=0 na=1]
	[  +0.000927] FS-Cache: O-cookie d=00000000821c87cc{9p.inode} n=00000000d0eb2a7b
	[  +0.001035] FS-Cache: O-key=[8] '176fed0000000000'
	[  +0.000735] FS-Cache: N-cookie c=00000022 [p=00000015 fl=2 nc=0 na=1]
	[  +0.000946] FS-Cache: N-cookie d=00000000821c87cc{9p.inode} n=00000000442a717b
	[  +0.001037] FS-Cache: N-key=[8] '176fed0000000000'
	
	
	==> etcd [3eedb6e8f6840ac29e27007777d3751b9d3c3115d81b0c7922ea53ca5bdf40b0] <==
	2024-08-07 19:24:24.586911 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-07 19:24:34.586999 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-07 19:24:44.586830 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-07 19:24:54.586845 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-07 19:25:04.587023 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-07 19:25:14.586807 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-07 19:25:24.586958 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-07 19:25:34.586878 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-07 19:25:44.586786 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-07 19:25:54.586752 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-07 19:26:04.586908 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-07 19:26:14.586870 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-07 19:26:24.586726 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-07 19:26:34.586871 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-07 19:26:44.586871 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-07 19:26:54.587773 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-07 19:27:04.586938 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-07 19:27:14.586865 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-07 19:27:24.586814 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-07 19:27:34.586698 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-07 19:27:44.586771 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-07 19:27:54.586951 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-07 19:28:04.587302 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-07 19:28:14.590347 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-07 19:28:24.586735 I | etcdserver/api/etcdhttp: /health OK (status code 200)
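The restarted etcd instance answers its /health probe every ~10s across the whole window, so the SecondStart failure is not an etcd liveness problem. The same endpoint can be hit by hand via the 127.0.0.1:2381 metrics listener shown in the first instance's log below, assuming curl inside the node container and etcd 3.4's behavior of serving /health on that listener:

    docker exec old-k8s-version-145103 curl -s http://127.0.0.1:2381/health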
	
	
	==> etcd [9cc157f961887e16a46adadfe9ca360b16c7c88fa3fb260387fefcf3cbfbbc3f] <==
	2024-08-07 19:20:00.379092 I | embed: listening for peers on 192.168.85.2:2380
	2024-08-07 19:20:00.379302 I | embed: listening for metrics on http://127.0.0.1:2381
	raft2024/08/07 19:20:00 INFO: 9f0758e1c58a86ed is starting a new election at term 1
	raft2024/08/07 19:20:00 INFO: 9f0758e1c58a86ed became candidate at term 2
	raft2024/08/07 19:20:00 INFO: 9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2
	raft2024/08/07 19:20:00 INFO: 9f0758e1c58a86ed became leader at term 2
	raft2024/08/07 19:20:00 INFO: raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2
	2024-08-07 19:20:00.823082 I | etcdserver: setting up the initial cluster version to 3.4
	2024-08-07 19:20:00.823197 I | etcdserver: published {Name:old-k8s-version-145103 ClientURLs:[https://192.168.85.2:2379]} to cluster 68eaea490fab4e05
	2024-08-07 19:20:00.823245 I | embed: ready to serve client requests
	2024-08-07 19:20:00.825908 I | embed: serving client requests on 192.168.85.2:2379
	2024-08-07 19:20:00.829592 I | embed: ready to serve client requests
	2024-08-07 19:20:00.834190 I | embed: serving client requests on 127.0.0.1:2379
	2024-08-07 19:20:00.836599 N | etcdserver/membership: set the initial cluster version to 3.4
	2024-08-07 19:20:00.836799 I | etcdserver/api: enabled capabilities for version 3.4
	2024-08-07 19:20:24.331345 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-07 19:20:27.150114 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-07 19:20:37.145315 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-07 19:20:47.145627 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-07 19:20:57.145658 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-07 19:21:07.145484 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-07 19:21:17.145478 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-07 19:21:27.145479 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-07 19:21:37.145581 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-07 19:21:47.145334 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> kernel <==
	 19:28:28 up  3:10,  0 users,  load average: 1.49, 2.27, 2.92
	Linux old-k8s-version-145103 5.15.0-1066-aws #72~20.04.1-Ubuntu SMP Sat Jul 20 07:44:07 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [1d7137ab81f756c634b0c7c58cd62f9d324463ad793954a60c9479bdca07c1d9] <==
	I0807 19:20:49.757641       1 main.go:299] handling current node
	W0807 19:20:59.214367       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0807 19:20:59.214402       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	I0807 19:20:59.758142       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0807 19:20:59.758182       1 main.go:299] handling current node
	W0807 19:21:04.960987       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0807 19:21:04.965751       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	W0807 19:21:06.751823       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0807 19:21:06.751860       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	I0807 19:21:09.757445       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0807 19:21:09.757485       1 main.go:299] handling current node
	I0807 19:21:19.757915       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0807 19:21:19.757952       1 main.go:299] handling current node
	W0807 19:21:28.051692       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0807 19:21:28.051727       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	I0807 19:21:29.757401       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0807 19:21:29.757435       1 main.go:299] handling current node
	W0807 19:21:35.935959       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0807 19:21:35.935997       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	I0807 19:21:39.758032       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0807 19:21:39.758069       1 main.go:299] handling current node
	W0807 19:21:43.290069       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0807 19:21:43.290107       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	I0807 19:21:49.758200       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0807 19:21:49.758239       1 main.go:299] handling current node
	
	
	==> kindnet [c546073d53684c34a050dfc7fc09a0893a42322936b7996cf13dc99f189689dc] <==
	I0807 19:27:12.873197       1 main.go:299] handling current node
	W0807 19:27:17.015715       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0807 19:27:17.015752       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	I0807 19:27:22.873519       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0807 19:27:22.873554       1 main.go:299] handling current node
	I0807 19:27:32.873367       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0807 19:27:32.873399       1 main.go:299] handling current node
	I0807 19:27:42.873528       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0807 19:27:42.873565       1 main.go:299] handling current node
	W0807 19:27:49.807826       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0807 19:27:49.807860       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	W0807 19:27:51.029391       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0807 19:27:51.029945       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	I0807 19:27:52.872751       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0807 19:27:52.872791       1 main.go:299] handling current node
	I0807 19:28:02.872984       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0807 19:28:02.873020       1 main.go:299] handling current node
	W0807 19:28:05.246305       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0807 19:28:05.246367       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	I0807 19:28:12.873102       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0807 19:28:12.873145       1 main.go:299] handling current node
	I0807 19:28:22.873526       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0807 19:28:22.873561       1 main.go:299] handling current node
	W0807 19:28:28.322389       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0807 19:28:28.322430       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
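Both kindnet generations log the same RBAC denials: the kube-system:kindnet service account cannot list namespaces, pods, or networkpolicies at the cluster scope, which points at a missing or stale kindnet ClusterRole/ClusterRoleBinding on this profile. A direct way to confirm the denials, assuming kubectl access:

    kubectl --context old-k8s-version-145103 auth can-i list pods --all-namespaces \
      --as=system:serviceaccount:kube-system:kindnet
    kubectl --context old-k8s-version-145103 auth can-i list networkpolicies.networking.k8s.io \
      --as=system:serviceaccount:kube-system:kindnet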
	
	
	==> kube-apiserver [3bc865fe065ed3ad03b284fb01361f63156a204b7ae0b28683c17e870ac4fc4e] <==
	I0807 19:25:10.867429       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0807 19:25:10.867576       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0807 19:25:42.667370       1 client.go:360] parsed scheme: "passthrough"
	I0807 19:25:42.667426       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0807 19:25:42.667436       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0807 19:25:42.855487       1 handler_proxy.go:102] no RequestInfo found in the context
	E0807 19:25:42.855753       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0807 19:25:42.855835       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0807 19:26:16.144967       1 client.go:360] parsed scheme: "passthrough"
	I0807 19:26:16.145009       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0807 19:26:16.145017       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0807 19:27:00.042343       1 client.go:360] parsed scheme: "passthrough"
	I0807 19:27:00.042392       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0807 19:27:00.042402       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0807 19:27:31.379980       1 client.go:360] parsed scheme: "passthrough"
	I0807 19:27:31.380020       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0807 19:27:31.380029       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0807 19:27:39.871141       1 handler_proxy.go:102] no RequestInfo found in the context
	E0807 19:27:39.871318       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0807 19:27:39.871335       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0807 19:28:12.779358       1 client.go:360] parsed scheme: "passthrough"
	I0807 19:28:12.779401       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0807 19:28:12.779409       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
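The recurring OpenAPI 503 for v1beta1.metrics.k8s.io lines up with the metrics-server pod being stuck in ImagePullBackOff: the aggregated APIService has no healthy backend, so the aggregation controller keeps rate-limited requeues. A quick check of the APIService condition:

    kubectl --context old-k8s-version-145103 get apiservice v1beta1.metrics.k8s.io
    # expect AVAILABLE=False while metrics-server is not running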
	
	
	==> kube-apiserver [b297faa73cadffc51a549e10c40129c612c36462461c40cbdb9bc641d6ee9a07] <==
	I0807 19:20:07.541011       1 shared_informer.go:247] Caches are synced for crd-autoregister 
	I0807 19:20:07.556131       1 shared_informer.go:247] Caches are synced for node_authorizer 
	I0807 19:20:07.562461       1 controller.go:606] quota admission added evaluator for: namespaces
	I0807 19:20:08.199574       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0807 19:20:08.199597       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0807 19:20:08.206630       1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
	I0807 19:20:08.211099       1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
	I0807 19:20:08.211129       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0807 19:20:08.750522       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0807 19:20:08.799061       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0807 19:20:08.919938       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I0807 19:20:08.921266       1 controller.go:606] quota admission added evaluator for: endpoints
	I0807 19:20:08.926551       1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0807 19:20:09.940512       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	I0807 19:20:10.612308       1 controller.go:606] quota admission added evaluator for: deployments.apps
	I0807 19:20:10.697500       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	I0807 19:20:19.106016       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	I0807 19:20:25.957127       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
	I0807 19:20:26.085768       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	I0807 19:20:35.201450       1 client.go:360] parsed scheme: "passthrough"
	I0807 19:20:35.201497       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0807 19:20:35.201507       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0807 19:21:19.580165       1 client.go:360] parsed scheme: "passthrough"
	I0807 19:21:19.580227       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0807 19:21:19.580235       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	
	==> kube-controller-manager [3e7229b91d01277de26c3bdbed648db1659a47ae7a01f17a25604535059e3b69] <==
	I0807 19:20:25.998675       1 event.go:291] "Event occurred" object="kube-system/kube-controller-manager-old-k8s-version-145103" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0807 19:20:25.999876       1 event.go:291] "Event occurred" object="kube-system/kube-apiserver-old-k8s-version-145103" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0807 19:20:26.018925       1 shared_informer.go:247] Caches are synced for disruption 
	I0807 19:20:26.019186       1 disruption.go:339] Sending events to api server.
	I0807 19:20:26.019340       1 shared_informer.go:247] Caches are synced for deployment 
	I0807 19:20:26.020309       1 shared_informer.go:247] Caches are synced for ReplicaSet 
	I0807 19:20:26.048924       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-nk57r"
	I0807 19:20:26.049165       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-7lhnq"
	I0807 19:20:26.155842       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-74ff55c5b to 2"
	I0807 19:20:26.193199       1 shared_informer.go:247] Caches are synced for resource quota 
	I0807 19:20:26.223577       1 shared_informer.go:247] Caches are synced for resource quota 
	I0807 19:20:26.228052       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-tzktd"
	I0807 19:20:26.271790       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-pnlz9"
	I0807 19:20:26.293386       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I0807 19:20:26.566572       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0807 19:20:26.566603       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0807 19:20:26.597494       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0807 19:20:27.714126       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
	I0807 19:20:27.732201       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-tzktd"
	I0807 19:20:30.940826       1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I0807 19:21:56.098970       1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-9975d5f86 to 1"
	I0807 19:21:56.133747       1 event.go:291] "Event occurred" object="kube-system/metrics-server-9975d5f86" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-9975d5f86-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
	E0807 19:21:56.154880       1 replica_set.go:532] sync "kube-system/metrics-server-9975d5f86" failed with pods "metrics-server-9975d5f86-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
	E0807 19:21:56.217002       1 clusterroleaggregation_controller.go:181] view failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "view": the object has been modified; please apply your changes to the latest version and try again
	E0807 19:21:56.221920       1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
	
	
	==> kube-controller-manager [feec356a13b99aa000a5eb5efe3d70a5b72bd5a70b7158a40648afc4cb27eadf] <==
	W0807 19:24:05.501003       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0807 19:24:31.523826       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0807 19:24:37.151417       1 request.go:655] Throttling request took 1.048360949s, request: GET:https://192.168.85.2:8443/apis/networking.k8s.io/v1beta1?timeout=32s
	W0807 19:24:38.002900       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0807 19:25:02.028602       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0807 19:25:09.653370       1 request.go:655] Throttling request took 1.048251355s, request: GET:https://192.168.85.2:8443/apis/events.k8s.io/v1?timeout=32s
	W0807 19:25:10.510123       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0807 19:25:32.531773       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0807 19:25:42.160699       1 request.go:655] Throttling request took 1.048141487s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
	W0807 19:25:43.013772       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0807 19:26:03.033890       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0807 19:26:14.664420       1 request.go:655] Throttling request took 1.048271557s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
	W0807 19:26:15.516089       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0807 19:26:33.535768       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0807 19:26:47.166443       1 request.go:655] Throttling request took 1.048440866s, request: GET:https://192.168.85.2:8443/apis/authentication.k8s.io/v1beta1?timeout=32s
	W0807 19:26:48.023176       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0807 19:27:04.037760       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0807 19:27:19.673662       1 request.go:655] Throttling request took 1.048400567s, request: GET:https://192.168.85.2:8443/apis/apiextensions.k8s.io/v1?timeout=32s
	W0807 19:27:20.525094       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0807 19:27:34.539588       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0807 19:27:52.178289       1 request.go:655] Throttling request took 1.050734879s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
	W0807 19:27:53.026986       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0807 19:28:05.041786       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0807 19:28:24.677552       1 request.go:655] Throttling request took 1.047758476s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
	W0807 19:28:25.529302       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
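The controller-manager symptoms are all downstream of that same broken metrics API: discovery fails for metrics.k8s.io/v1beta1, which stalls the garbage-collector and resource-quota controllers, and the retries trigger the client-side throttling messages. Querying the aggregated group directly should show the same 503, assuming kubectl access:

    kubectl --context old-k8s-version-145103 get --raw /apis/metrics.k8s.io/v1beta1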
	
	
	==> kube-proxy [6fa7a0f941cdfdb925109b190e19418627e84b3541aca1a841d45a2170aab263] <==
	I0807 19:20:27.232912       1 node.go:172] Successfully retrieved node IP: 192.168.85.2
	I0807 19:20:27.233048       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.85.2), assume IPv4 operation
	W0807 19:20:27.270166       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0807 19:20:27.270253       1 server_others.go:185] Using iptables Proxier.
	I0807 19:20:27.270488       1 server.go:650] Version: v1.20.0
	I0807 19:20:27.270995       1 config.go:315] Starting service config controller
	I0807 19:20:27.271005       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0807 19:20:27.278103       1 config.go:224] Starting endpoint slice config controller
	I0807 19:20:27.278126       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0807 19:20:27.371136       1 shared_informer.go:247] Caches are synced for service config 
	I0807 19:20:27.380420       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	
	==> kube-proxy [ef0324338dc1e165f57b48b6697cdfdd38e0a716c5db953f324a34d8b8b07a4a] <==
	I0807 19:22:42.359944       1 node.go:172] Successfully retrieved node IP: 192.168.85.2
	I0807 19:22:42.360063       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.85.2), assume IPv4 operation
	W0807 19:22:42.395176       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0807 19:22:42.395272       1 server_others.go:185] Using iptables Proxier.
	I0807 19:22:42.395521       1 server.go:650] Version: v1.20.0
	I0807 19:22:42.396120       1 config.go:315] Starting service config controller
	I0807 19:22:42.396137       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0807 19:22:42.397798       1 config.go:224] Starting endpoint slice config controller
	I0807 19:22:42.397813       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0807 19:22:42.496271       1 shared_informer.go:247] Caches are synced for service config 
	I0807 19:22:42.497938       1 shared_informer.go:247] Caches are synced for endpoint slice config 
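Both kube-proxy runs fall back to iptables because no proxy mode is set in their configuration; on a kubeadm-managed cluster like this one that value lives in the kube-proxy ConfigMap, so, assuming the standard kubeadm layout, it can be inspected with:

    kubectl --context old-k8s-version-145103 -n kube-system get configmap kube-proxy \
      -o yaml | grep -i 'mode:'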
	
	
	==> kube-scheduler [6bf62d68898142ab7b24d33e793c1c5ee47a83b90edd2641cba394a846742a58] <==
	I0807 19:22:34.093139       1 serving.go:331] Generated self-signed cert in-memory
	W0807 19:22:38.615870       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0807 19:22:38.615929       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0807 19:22:38.615940       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0807 19:22:38.615946       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0807 19:22:38.974753       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0807 19:22:38.980757       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0807 19:22:38.980776       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0807 19:22:38.980804       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0807 19:22:39.281072       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kube-scheduler [9528fcb65a1d0f25a2c40a2d86715330fc6214aec182d07f3aaaf17856447d71] <==
	I0807 19:20:01.958167       1 serving.go:331] Generated self-signed cert in-memory
	W0807 19:20:07.381966       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0807 19:20:07.382003       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0807 19:20:07.382013       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0807 19:20:07.382019       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0807 19:20:07.466651       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0807 19:20:07.471708       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0807 19:20:07.471739       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0807 19:20:07.471760       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0807 19:20:07.482201       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0807 19:20:07.482383       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0807 19:20:07.482764       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0807 19:20:07.485586       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0807 19:20:07.485730       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0807 19:20:07.485853       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0807 19:20:07.485978       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0807 19:20:07.492131       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0807 19:20:07.492238       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0807 19:20:07.492394       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0807 19:20:07.492498       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0807 19:20:07.492629       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0807 19:20:08.332464       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0807 19:20:08.587670       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0807 19:20:08.871869       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kubelet <==
	Aug 07 19:26:55 old-k8s-version-145103 kubelet[667]: E0807 19:26:55.505139     667 pod_workers.go:191] Error syncing pod 23d2813a-d0e0-4efb-88b5-a90255e3e770 ("metrics-server-9975d5f86-g5777_kube-system(23d2813a-d0e0-4efb-88b5-a90255e3e770)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 07 19:27:00 old-k8s-version-145103 kubelet[667]: I0807 19:27:00.509426     667 scope.go:95] [topologymanager] RemoveContainer - Container ID: a0d3b188042a6406214358c9c2548c6939137c1e58de02480b418a532161210d
	Aug 07 19:27:00 old-k8s-version-145103 kubelet[667]: E0807 19:27:00.509785     667 pod_workers.go:191] Error syncing pod 33c5829f-ed2d-48df-8d3b-9d4927dc0083 ("dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"
	Aug 07 19:27:10 old-k8s-version-145103 kubelet[667]: E0807 19:27:10.505231     667 pod_workers.go:191] Error syncing pod 23d2813a-d0e0-4efb-88b5-a90255e3e770 ("metrics-server-9975d5f86-g5777_kube-system(23d2813a-d0e0-4efb-88b5-a90255e3e770)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 07 19:27:15 old-k8s-version-145103 kubelet[667]: I0807 19:27:15.504461     667 scope.go:95] [topologymanager] RemoveContainer - Container ID: a0d3b188042a6406214358c9c2548c6939137c1e58de02480b418a532161210d
	Aug 07 19:27:15 old-k8s-version-145103 kubelet[667]: E0807 19:27:15.504826     667 pod_workers.go:191] Error syncing pod 33c5829f-ed2d-48df-8d3b-9d4927dc0083 ("dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"
	Aug 07 19:27:24 old-k8s-version-145103 kubelet[667]: E0807 19:27:24.505344     667 pod_workers.go:191] Error syncing pod 23d2813a-d0e0-4efb-88b5-a90255e3e770 ("metrics-server-9975d5f86-g5777_kube-system(23d2813a-d0e0-4efb-88b5-a90255e3e770)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 07 19:27:26 old-k8s-version-145103 kubelet[667]: I0807 19:27:26.510276     667 scope.go:95] [topologymanager] RemoveContainer - Container ID: a0d3b188042a6406214358c9c2548c6939137c1e58de02480b418a532161210d
	Aug 07 19:27:26 old-k8s-version-145103 kubelet[667]: E0807 19:27:26.511078     667 pod_workers.go:191] Error syncing pod 33c5829f-ed2d-48df-8d3b-9d4927dc0083 ("dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"
	Aug 07 19:27:35 old-k8s-version-145103 kubelet[667]: E0807 19:27:35.505097     667 pod_workers.go:191] Error syncing pod 23d2813a-d0e0-4efb-88b5-a90255e3e770 ("metrics-server-9975d5f86-g5777_kube-system(23d2813a-d0e0-4efb-88b5-a90255e3e770)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 07 19:27:39 old-k8s-version-145103 kubelet[667]: I0807 19:27:39.504527     667 scope.go:95] [topologymanager] RemoveContainer - Container ID: a0d3b188042a6406214358c9c2548c6939137c1e58de02480b418a532161210d
	Aug 07 19:27:39 old-k8s-version-145103 kubelet[667]: E0807 19:27:39.505134     667 pod_workers.go:191] Error syncing pod 33c5829f-ed2d-48df-8d3b-9d4927dc0083 ("dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"
	Aug 07 19:27:47 old-k8s-version-145103 kubelet[667]: E0807 19:27:47.505474     667 pod_workers.go:191] Error syncing pod 23d2813a-d0e0-4efb-88b5-a90255e3e770 ("metrics-server-9975d5f86-g5777_kube-system(23d2813a-d0e0-4efb-88b5-a90255e3e770)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 07 19:27:54 old-k8s-version-145103 kubelet[667]: I0807 19:27:54.507260     667 scope.go:95] [topologymanager] RemoveContainer - Container ID: a0d3b188042a6406214358c9c2548c6939137c1e58de02480b418a532161210d
	Aug 07 19:27:54 old-k8s-version-145103 kubelet[667]: E0807 19:27:54.507623     667 pod_workers.go:191] Error syncing pod 33c5829f-ed2d-48df-8d3b-9d4927dc0083 ("dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"
	Aug 07 19:28:00 old-k8s-version-145103 kubelet[667]: E0807 19:28:00.505524     667 pod_workers.go:191] Error syncing pod 23d2813a-d0e0-4efb-88b5-a90255e3e770 ("metrics-server-9975d5f86-g5777_kube-system(23d2813a-d0e0-4efb-88b5-a90255e3e770)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 07 19:28:07 old-k8s-version-145103 kubelet[667]: I0807 19:28:07.504427     667 scope.go:95] [topologymanager] RemoveContainer - Container ID: a0d3b188042a6406214358c9c2548c6939137c1e58de02480b418a532161210d
	Aug 07 19:28:07 old-k8s-version-145103 kubelet[667]: E0807 19:28:07.504777     667 pod_workers.go:191] Error syncing pod 33c5829f-ed2d-48df-8d3b-9d4927dc0083 ("dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"
	Aug 07 19:28:12 old-k8s-version-145103 kubelet[667]: E0807 19:28:12.505296     667 pod_workers.go:191] Error syncing pod 23d2813a-d0e0-4efb-88b5-a90255e3e770 ("metrics-server-9975d5f86-g5777_kube-system(23d2813a-d0e0-4efb-88b5-a90255e3e770)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 07 19:28:21 old-k8s-version-145103 kubelet[667]: I0807 19:28:21.504502     667 scope.go:95] [topologymanager] RemoveContainer - Container ID: a0d3b188042a6406214358c9c2548c6939137c1e58de02480b418a532161210d
	Aug 07 19:28:21 old-k8s-version-145103 kubelet[667]: E0807 19:28:21.505410     667 pod_workers.go:191] Error syncing pod 33c5829f-ed2d-48df-8d3b-9d4927dc0083 ("dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-mx57w_kubernetes-dashboard(33c5829f-ed2d-48df-8d3b-9d4927dc0083)"
	Aug 07 19:28:24 old-k8s-version-145103 kubelet[667]: E0807 19:28:24.518653     667 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host
	Aug 07 19:28:24 old-k8s-version-145103 kubelet[667]: E0807 19:28:24.524879     667 kuberuntime_image.go:51] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host
	Aug 07 19:28:24 old-k8s-version-145103 kubelet[667]: E0807 19:28:24.525172     667 kuberuntime_manager.go:829] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:metrics-server-token-zkf4x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec
:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-9975d5f86-g5777_kube-system(23d2813
a-d0e0-4efb-88b5-a90255e3e770): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host
	Aug 07 19:28:24 old-k8s-version-145103 kubelet[667]: E0807 19:28:24.525342     667 pod_workers.go:191] Error syncing pod 23d2813a-d0e0-4efb-88b5-a90255e3e770 ("metrics-server-9975d5f86-g5777_kube-system(23d2813a-d0e0-4efb-88b5-a90255e3e770)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	
	
	==> kubernetes-dashboard [c969c3eda055d4e537185b68a93ca50ae4c4f1bd8623727c4836ae5049aaa92f] <==
	2024/08/07 19:23:11 Using namespace: kubernetes-dashboard
	2024/08/07 19:23:11 Using in-cluster config to connect to apiserver
	2024/08/07 19:23:11 Using secret token for csrf signing
	2024/08/07 19:23:11 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/08/07 19:23:11 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/08/07 19:23:11 Successful initial request to the apiserver, version: v1.20.0
	2024/08/07 19:23:11 Generating JWE encryption key
	2024/08/07 19:23:11 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/08/07 19:23:11 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/08/07 19:23:12 Initializing JWE encryption key from synchronized object
	2024/08/07 19:23:12 Creating in-cluster Sidecar client
	2024/08/07 19:23:12 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/07 19:23:12 Serving insecurely on HTTP port: 9090
	2024/08/07 19:23:42 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/07 19:24:12 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/07 19:24:42 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/07 19:25:12 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/07 19:25:42 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/07 19:26:12 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/07 19:26:42 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/07 19:27:12 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/07 19:27:42 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/07 19:28:12 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/07 19:23:11 Starting overwatch
	
	
	==> storage-provisioner [1f71dfd46d47d861349904b991e501ae81c87e5f99aa08ea12edaf13977fd3ef] <==
	I0807 19:23:24.606791       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0807 19:23:24.619914       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0807 19:23:24.619965       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0807 19:23:42.203860       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0807 19:23:42.211800       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-145103_5cb902a6-f9e3-42b9-b09e-2f1c82b65631!
	I0807 19:23:42.211901       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5458d1d1-bb60-4650-8690-b47d33dad563", APIVersion:"v1", ResourceVersion:"844", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-145103_5cb902a6-f9e3-42b9-b09e-2f1c82b65631 became leader
	I0807 19:23:42.312001       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-145103_5cb902a6-f9e3-42b9-b09e-2f1c82b65631!
	
	
	==> storage-provisioner [c39d2dd3e3af4ce2f603cdcb5ffba311c3e583e21a32a79940c52420d20e73c2] <==
	I0807 19:22:42.162175       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0807 19:23:12.164796       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-145103 -n old-k8s-version-145103
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-145103 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-9975d5f86-g5777
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-145103 describe pod metrics-server-9975d5f86-g5777
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-145103 describe pod metrics-server-9975d5f86-g5777: exit status 1 (119.446609ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-9975d5f86-g5777" not found

** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-145103 describe pod metrics-server-9975d5f86-g5777: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (380.43s)
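For reference, the post-mortem above finds the stuck pod with kubectl get po -A --field-selector=status.phase!=Running. A minimal client-go sketch of the same server-side filter follows; it is an illustration only, not the helpers_test.go implementation, and the kubeconfig path is an assumed placeholder.

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// The kubeconfig path is a placeholder; the test harness points
	// kubectl at the profile's context instead.
	config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}
	// The field selector is evaluated server-side, so only pods that are
	// not in phase Running come back - the same filter the post-mortem uses.
	pods, err := clientset.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(),
		metav1.ListOptions{FieldSelector: "status.phase!=Running"})
	if err != nil {
		log.Fatal(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s: %s\n", p.Namespace, p.Name, p.Status.Phase)
	}
}

Filtering server-side keeps the post-mortem cheap: the apiserver returns only the non-running pods, so the helper never has to page through every pod in the cluster.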


Test pass (303/336)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 9.66
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.2
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.15
12 TestDownloadOnly/v1.30.3/json-events 7.09
13 TestDownloadOnly/v1.30.3/preload-exists 0
17 TestDownloadOnly/v1.30.3/LogsDuration 0.07
18 TestDownloadOnly/v1.30.3/DeleteAll 0.21
19 TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds 0.13
21 TestDownloadOnly/v1.31.0-rc.0/json-events 6.61
22 TestDownloadOnly/v1.31.0-rc.0/preload-exists 0
26 TestDownloadOnly/v1.31.0-rc.0/LogsDuration 0.41
27 TestDownloadOnly/v1.31.0-rc.0/DeleteAll 0.36
28 TestDownloadOnly/v1.31.0-rc.0/DeleteAlwaysSucceeds 0.25
30 TestBinaryMirror 0.55
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
36 TestAddons/Setup 223.29
40 TestAddons/serial/GCPAuth/Namespaces 0.18
42 TestAddons/parallel/Registry 16.11
43 TestAddons/parallel/Ingress 18.57
44 TestAddons/parallel/InspektorGadget 12.01
45 TestAddons/parallel/MetricsServer 5.91
48 TestAddons/parallel/CSI 50.59
49 TestAddons/parallel/Headlamp 15.98
50 TestAddons/parallel/CloudSpanner 6.65
51 TestAddons/parallel/LocalPath 53.9
52 TestAddons/parallel/NvidiaDevicePlugin 5.68
53 TestAddons/parallel/Yakd 11.97
54 TestAddons/StoppedEnableDisable 12.27
55 TestCertOptions 34.12
56 TestCertExpiration 226.94
58 TestForceSystemdFlag 40.56
59 TestForceSystemdEnv 54.75
60 TestDockerEnvContainerd 48.33
65 TestErrorSpam/setup 34.34
66 TestErrorSpam/start 0.74
67 TestErrorSpam/status 1
68 TestErrorSpam/pause 1.7
69 TestErrorSpam/unpause 1.78
70 TestErrorSpam/stop 1.41
73 TestFunctional/serial/CopySyncFile 0
74 TestFunctional/serial/StartWithProxy 70.68
75 TestFunctional/serial/AuditLog 0
76 TestFunctional/serial/SoftStart 6.09
77 TestFunctional/serial/KubeContext 0.06
78 TestFunctional/serial/KubectlGetPods 0.09
81 TestFunctional/serial/CacheCmd/cache/add_remote 4.59
82 TestFunctional/serial/CacheCmd/cache/add_local 1.55
83 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
84 TestFunctional/serial/CacheCmd/cache/list 0.05
85 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.31
86 TestFunctional/serial/CacheCmd/cache/cache_reload 2.19
87 TestFunctional/serial/CacheCmd/cache/delete 0.11
88 TestFunctional/serial/MinikubeKubectlCmd 0.16
89 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
90 TestFunctional/serial/ExtraConfig 36.76
91 TestFunctional/serial/ComponentHealth 0.1
92 TestFunctional/serial/LogsCmd 1.63
93 TestFunctional/serial/LogsFileCmd 1.61
94 TestFunctional/serial/InvalidService 3.76
96 TestFunctional/parallel/ConfigCmd 0.48
97 TestFunctional/parallel/DashboardCmd 11.66
98 TestFunctional/parallel/DryRun 0.43
99 TestFunctional/parallel/InternationalLanguage 0.18
100 TestFunctional/parallel/StatusCmd 1.04
104 TestFunctional/parallel/ServiceCmdConnect 9.62
105 TestFunctional/parallel/AddonsCmd 0.14
106 TestFunctional/parallel/PersistentVolumeClaim 36.2
108 TestFunctional/parallel/SSHCmd 0.67
109 TestFunctional/parallel/CpCmd 2.41
111 TestFunctional/parallel/FileSync 0.35
112 TestFunctional/parallel/CertSync 2.1
116 TestFunctional/parallel/NodeLabels 0.11
118 TestFunctional/parallel/NonActiveRuntimeDisabled 0.67
120 TestFunctional/parallel/License 0.36
122 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.68
123 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
125 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 30.44
126 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.1
127 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
131 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
132 TestFunctional/parallel/ServiceCmd/DeployApp 6.23
133 TestFunctional/parallel/ProfileCmd/profile_not_create 0.4
134 TestFunctional/parallel/ProfileCmd/profile_list 0.44
135 TestFunctional/parallel/ProfileCmd/profile_json_output 0.38
136 TestFunctional/parallel/MountCmd/any-port 7.28
137 TestFunctional/parallel/ServiceCmd/List 0.59
138 TestFunctional/parallel/ServiceCmd/JSONOutput 0.54
139 TestFunctional/parallel/ServiceCmd/HTTPS 0.38
140 TestFunctional/parallel/ServiceCmd/Format 0.38
141 TestFunctional/parallel/ServiceCmd/URL 0.46
142 TestFunctional/parallel/MountCmd/specific-port 2.46
143 TestFunctional/parallel/MountCmd/VerifyCleanup 2.45
144 TestFunctional/parallel/Version/short 0.09
145 TestFunctional/parallel/Version/components 1.26
146 TestFunctional/parallel/ImageCommands/ImageListShort 0.26
147 TestFunctional/parallel/ImageCommands/ImageListTable 0.3
148 TestFunctional/parallel/ImageCommands/ImageListJson 0.3
149 TestFunctional/parallel/ImageCommands/ImageListYaml 0.27
150 TestFunctional/parallel/ImageCommands/ImageBuild 3.22
151 TestFunctional/parallel/ImageCommands/Setup 0.83
152 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.8
153 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.28
154 TestFunctional/parallel/UpdateContextCmd/no_changes 0.19
155 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.2
156 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.21
157 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.82
158 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.83
159 TestFunctional/parallel/ImageCommands/ImageRemove 0.55
160 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.87
161 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.78
162 TestFunctional/delete_echo-server_images 0.06
163 TestFunctional/delete_my-image_image 0.02
164 TestFunctional/delete_minikube_cached_images 0.02
168 TestMultiControlPlane/serial/StartCluster 124.37
169 TestMultiControlPlane/serial/DeployApp 5.99
170 TestMultiControlPlane/serial/PingHostFromPods 1.69
171 TestMultiControlPlane/serial/AddWorkerNode 24.93
172 TestMultiControlPlane/serial/NodeLabels 0.13
173 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.79
174 TestMultiControlPlane/serial/CopyFile 19.93
175 TestMultiControlPlane/serial/StopSecondaryNode 12.93
176 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.57
177 TestMultiControlPlane/serial/RestartSecondaryNode 18.81
178 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.78
179 TestMultiControlPlane/serial/RestartClusterKeepsNodes 132.47
180 TestMultiControlPlane/serial/DeleteSecondaryNode 11.57
181 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.58
182 TestMultiControlPlane/serial/StopCluster 36.07
183 TestMultiControlPlane/serial/RestartCluster 64.77
184 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.55
185 TestMultiControlPlane/serial/AddSecondaryNode 42.47
186 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.8
190 TestJSONOutput/start/Command 61.67
191 TestJSONOutput/start/Audit 0
193 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
196 TestJSONOutput/pause/Command 0.75
197 TestJSONOutput/pause/Audit 0
199 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
202 TestJSONOutput/unpause/Command 0.7
203 TestJSONOutput/unpause/Audit 0
205 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
206 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
208 TestJSONOutput/stop/Command 5.73
209 TestJSONOutput/stop/Audit 0
211 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
212 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
213 TestErrorJSONOutput 0.23
215 TestKicCustomNetwork/create_custom_network 42.13
216 TestKicCustomNetwork/use_default_bridge_network 35.15
217 TestKicExistingNetwork 33.3
218 TestKicCustomSubnet 34.37
219 TestKicStaticIP 31.32
220 TestMainNoArgs 0.06
221 TestMinikubeProfile 70.84
224 TestMountStart/serial/StartWithMountFirst 6.2
225 TestMountStart/serial/VerifyMountFirst 0.26
226 TestMountStart/serial/StartWithMountSecond 6.32
227 TestMountStart/serial/VerifyMountSecond 0.25
228 TestMountStart/serial/DeleteFirst 1.6
229 TestMountStart/serial/VerifyMountPostDelete 0.26
230 TestMountStart/serial/Stop 1.2
231 TestMountStart/serial/RestartStopped 7.83
232 TestMountStart/serial/VerifyMountPostStop 0.25
235 TestMultiNode/serial/FreshStart2Nodes 90.4
236 TestMultiNode/serial/DeployApp2Nodes 17.18
237 TestMultiNode/serial/PingHostFrom2Pods 0.92
238 TestMultiNode/serial/AddNode 17.56
239 TestMultiNode/serial/MultiNodeLabels 0.09
240 TestMultiNode/serial/ProfileList 0.34
241 TestMultiNode/serial/CopyFile 10.25
242 TestMultiNode/serial/StopNode 2.24
243 TestMultiNode/serial/StartAfterStop 9.78
244 TestMultiNode/serial/RestartKeepsNodes 141.26
245 TestMultiNode/serial/DeleteNode 5.95
246 TestMultiNode/serial/StopMultiNode 24.06
247 TestMultiNode/serial/RestartMultiNode 52.12
248 TestMultiNode/serial/ValidateNameConflict 34.76
253 TestPreload 117.26
255 TestScheduledStopUnix 108.82
258 TestInsufficientStorage 10.62
259 TestRunningBinaryUpgrade 92.1
261 TestKubernetesUpgrade 362.42
262 TestMissingContainerUpgrade 176
264 TestPause/serial/Start 83.67
266 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
267 TestNoKubernetes/serial/StartWithK8s 43.93
268 TestNoKubernetes/serial/StartWithStopK8s 7.69
269 TestNoKubernetes/serial/Start 6.99
270 TestNoKubernetes/serial/VerifyK8sNotRunning 0.29
271 TestNoKubernetes/serial/ProfileList 1.03
272 TestNoKubernetes/serial/Stop 1.22
273 TestNoKubernetes/serial/StartNoArgs 6.86
274 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.28
275 TestPause/serial/SecondStartNoReconfiguration 8.29
276 TestPause/serial/Pause 0.95
277 TestPause/serial/VerifyStatus 0.43
278 TestPause/serial/Unpause 0.81
279 TestPause/serial/PauseAgain 1.18
280 TestPause/serial/DeletePaused 3.98
281 TestPause/serial/VerifyDeletedResources 0.19
282 TestStoppedBinaryUpgrade/Setup 1.14
283 TestStoppedBinaryUpgrade/Upgrade 128.63
284 TestStoppedBinaryUpgrade/MinikubeLogs 1.26
299 TestNetworkPlugins/group/false 4.6
304 TestStartStop/group/old-k8s-version/serial/FirstStart 142.6
305 TestStartStop/group/old-k8s-version/serial/DeployApp 9.7
306 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.55
307 TestStartStop/group/old-k8s-version/serial/Stop 13.06
309 TestStartStop/group/no-preload/serial/FirstStart 67.31
310 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.22
312 TestStartStop/group/no-preload/serial/DeployApp 10.39
313 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.16
314 TestStartStop/group/no-preload/serial/Stop 12.09
315 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
316 TestStartStop/group/no-preload/serial/SecondStart 266.88
317 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
318 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.1
319 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.24
320 TestStartStop/group/no-preload/serial/Pause 3.16
322 TestStartStop/group/embed-certs/serial/FirstStart 71.42
323 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
324 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.15
325 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.28
326 TestStartStop/group/old-k8s-version/serial/Pause 4.06
328 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 68.1
329 TestStartStop/group/embed-certs/serial/DeployApp 7.4
330 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.28
331 TestStartStop/group/embed-certs/serial/Stop 12.31
332 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.18
333 TestStartStop/group/embed-certs/serial/SecondStart 267.66
334 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 7.43
335 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.14
336 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.25
337 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.25
338 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 268.43
339 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
340 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.11
341 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.25
342 TestStartStop/group/embed-certs/serial/Pause 3.27
344 TestStartStop/group/newest-cni/serial/FirstStart 42.93
345 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
346 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.12
347 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.28
348 TestStartStop/group/default-k8s-diff-port/serial/Pause 4.44
349 TestNetworkPlugins/group/auto/Start 71.22
350 TestStartStop/group/newest-cni/serial/DeployApp 0
351 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.41
352 TestStartStop/group/newest-cni/serial/Stop 1.39
353 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.26
354 TestStartStop/group/newest-cni/serial/SecondStart 22.87
355 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
356 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
357 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.37
358 TestStartStop/group/newest-cni/serial/Pause 4.61
359 TestNetworkPlugins/group/kindnet/Start 70.97
360 TestNetworkPlugins/group/auto/KubeletFlags 0.42
361 TestNetworkPlugins/group/auto/NetCatPod 10.43
362 TestNetworkPlugins/group/auto/DNS 0.2
363 TestNetworkPlugins/group/auto/Localhost 0.15
364 TestNetworkPlugins/group/auto/HairPin 0.18
365 TestNetworkPlugins/group/calico/Start 84.85
366 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
367 TestNetworkPlugins/group/kindnet/KubeletFlags 0.42
368 TestNetworkPlugins/group/kindnet/NetCatPod 11.32
369 TestNetworkPlugins/group/kindnet/DNS 0.28
370 TestNetworkPlugins/group/kindnet/Localhost 0.23
371 TestNetworkPlugins/group/kindnet/HairPin 0.24
372 TestNetworkPlugins/group/custom-flannel/Start 63.57
373 TestNetworkPlugins/group/calico/ControllerPod 6.01
374 TestNetworkPlugins/group/calico/KubeletFlags 0.34
375 TestNetworkPlugins/group/calico/NetCatPod 12.31
376 TestNetworkPlugins/group/calico/DNS 0.23
377 TestNetworkPlugins/group/calico/Localhost 0.2
378 TestNetworkPlugins/group/calico/HairPin 0.19
379 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.46
380 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.4
381 TestNetworkPlugins/group/custom-flannel/DNS 0.23
382 TestNetworkPlugins/group/custom-flannel/Localhost 0.19
383 TestNetworkPlugins/group/custom-flannel/HairPin 0.17
384 TestNetworkPlugins/group/enable-default-cni/Start 97.11
385 TestNetworkPlugins/group/flannel/Start 63.11
386 TestNetworkPlugins/group/flannel/ControllerPod 6.01
387 TestNetworkPlugins/group/flannel/KubeletFlags 0.29
388 TestNetworkPlugins/group/flannel/NetCatPod 9.25
389 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.39
390 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.32
391 TestNetworkPlugins/group/flannel/DNS 0.19
392 TestNetworkPlugins/group/flannel/Localhost 0.18
393 TestNetworkPlugins/group/flannel/HairPin 0.17
394 TestNetworkPlugins/group/enable-default-cni/DNS 0.19
395 TestNetworkPlugins/group/enable-default-cni/Localhost 0.15
396 TestNetworkPlugins/group/enable-default-cni/HairPin 0.17
397 TestNetworkPlugins/group/bridge/Start 48.63
398 TestNetworkPlugins/group/bridge/KubeletFlags 0.28
399 TestNetworkPlugins/group/bridge/NetCatPod 10.3
400 TestNetworkPlugins/group/bridge/DNS 0.18
401 TestNetworkPlugins/group/bridge/Localhost 0.14
402 TestNetworkPlugins/group/bridge/HairPin 0.15
x
+
TestDownloadOnly/v1.20.0/json-events (9.66s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-777799 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-777799 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (9.654930514s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (9.66s)
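The json-events subtest drives minikube start with -o=json, which emits one JSON object per progress event on stdout. A minimal Go sketch of consuming that stream follows; it assumes nothing about the event schema beyond each value being a JSON object, and is not the aaa_download_only_test.go implementation.

package main

import (
	"encoding/json"
	"fmt"
	"io"
	"log"
	"os/exec"
)

func main() {
	// Same invocation the test runs, minus the logging flags.
	cmd := exec.Command("out/minikube-linux-arm64", "start", "-o=json",
		"--download-only", "-p", "download-only-777799", "--force",
		"--kubernetes-version=v1.20.0",
		"--container-runtime=containerd", "--driver=docker")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		log.Fatal(err)
	}
	if err := cmd.Start(); err != nil {
		log.Fatal(err)
	}
	// Decode the stream of top-level JSON values until it ends.
	dec := json.NewDecoder(stdout)
	for {
		var ev map[string]interface{}
		if err := dec.Decode(&ev); err == io.EOF {
			break
		} else if err != nil {
			log.Fatal(err)
		}
		fmt.Println(ev)
	}
	if err := cmd.Wait(); err != nil {
		log.Fatal(err)
	}
}

Decoding value-by-value lets a caller react to progress events while the start command is still running, which is what the JSON output mode exists for.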

x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-777799
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-777799: exit status 85 (67.792669ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-777799 | jenkins | v1.33.1 | 07 Aug 24 18:29 UTC |          |
	|         | -p download-only-777799        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/07 18:29:55
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0807 18:29:55.091762  448493 out.go:291] Setting OutFile to fd 1 ...
	I0807 18:29:55.091923  448493 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 18:29:55.091933  448493 out.go:304] Setting ErrFile to fd 2...
	I0807 18:29:55.091940  448493 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 18:29:55.092212  448493 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19389-443116/.minikube/bin
	W0807 18:29:55.092421  448493 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19389-443116/.minikube/config/config.json: open /home/jenkins/minikube-integration/19389-443116/.minikube/config/config.json: no such file or directory
	I0807 18:29:55.092895  448493 out.go:298] Setting JSON to true
	I0807 18:29:55.093923  448493 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":7946,"bootTime":1723047449,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0807 18:29:55.094000  448493 start.go:139] virtualization:  
	I0807 18:29:55.096862  448493 out.go:97] [download-only-777799] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	W0807 18:29:55.097093  448493 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19389-443116/.minikube/cache/preloaded-tarball: no such file or directory
	I0807 18:29:55.097174  448493 notify.go:220] Checking for updates...
	I0807 18:29:55.098845  448493 out.go:169] MINIKUBE_LOCATION=19389
	I0807 18:29:55.101015  448493 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0807 18:29:55.102916  448493 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19389-443116/kubeconfig
	I0807 18:29:55.104853  448493 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19389-443116/.minikube
	I0807 18:29:55.106432  448493 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0807 18:29:55.109875  448493 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0807 18:29:55.110252  448493 driver.go:392] Setting default libvirt URI to qemu:///system
	I0807 18:29:55.135976  448493 docker.go:123] docker version: linux-27.1.1:Docker Engine - Community
	I0807 18:29:55.136087  448493 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0807 18:29:55.201682  448493 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:51 SystemTime:2024-08-07 18:29:55.192075822 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0807 18:29:55.201796  448493 docker.go:307] overlay module found
	I0807 18:29:55.203573  448493 out.go:97] Using the docker driver based on user configuration
	I0807 18:29:55.203599  448493 start.go:297] selected driver: docker
	I0807 18:29:55.203606  448493 start.go:901] validating driver "docker" against <nil>
	I0807 18:29:55.203709  448493 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0807 18:29:55.266484  448493 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:51 SystemTime:2024-08-07 18:29:55.257008029 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0807 18:29:55.266674  448493 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0807 18:29:55.266973  448493 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0807 18:29:55.267132  448493 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0807 18:29:55.268978  448493 out.go:169] Using Docker driver with root privileges
	I0807 18:29:55.270384  448493 cni.go:84] Creating CNI manager for ""
	I0807 18:29:55.270403  448493 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0807 18:29:55.270414  448493 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0807 18:29:55.270512  448493 start.go:340] cluster config:
	{Name:download-only-777799 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-777799 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 18:29:55.272208  448493 out.go:97] Starting "download-only-777799" primary control-plane node in "download-only-777799" cluster
	I0807 18:29:55.272237  448493 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0807 18:29:55.274016  448493 out.go:97] Pulling base image v0.0.44-1723026928-19389 ...
	I0807 18:29:55.274047  448493 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0807 18:29:55.274152  448493 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 in local docker daemon
	I0807 18:29:55.289273  448493 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 to local cache
	I0807 18:29:55.289451  448493 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 in local cache directory
	I0807 18:29:55.289562  448493 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 to local cache
	I0807 18:29:55.356403  448493 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0807 18:29:55.356427  448493 cache.go:56] Caching tarball of preloaded images
	I0807 18:29:55.356582  448493 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0807 18:29:55.358854  448493 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0807 18:29:55.358879  448493 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I0807 18:29:55.466937  448493 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:7e3d48ccb9f143791669d02e14ce1643 -> /home/jenkins/minikube-integration/19389-443116/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-777799 host does not exist
	  To start a cluster, run: "minikube start -p download-only-777799"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)
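The preload URL in the log above carries a ?checksum=md5:... tag, so the tarball is verified after download. A minimal stand-alone sketch of that check follows, using the URL and checksum from the log; minikube delegates this to its own download package, so this illustrates the idea rather than the actual code path.

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"log"
	"net/http"
	"os"
)

func download(url, dest, wantMD5 string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	out, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer out.Close()

	// Hash the stream while writing it to disk so the file is read once.
	h := md5.New()
	if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != wantMD5 {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantMD5)
	}
	return nil
}

func main() {
	// URL and expected MD5 copied from the download log above.
	err := download(
		"https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4",
		"preload.tar.lz4",
		"7e3d48ccb9f143791669d02e14ce1643")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("preload verified")
}

Hashing through io.MultiWriter verifies the tarball in the same pass that writes it, so a corrupted preload is rejected before it is ever unpacked.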

x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.2s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.20s)

x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.15s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-777799
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.15s)

x
+
TestDownloadOnly/v1.30.3/json-events (7.09s)

=== RUN   TestDownloadOnly/v1.30.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-547887 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-547887 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (7.085316908s)
--- PASS: TestDownloadOnly/v1.30.3/json-events (7.09s)

x
+
TestDownloadOnly/v1.30.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.30.3/preload-exists
--- PASS: TestDownloadOnly/v1.30.3/preload-exists (0.00s)

x
+
TestDownloadOnly/v1.30.3/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.30.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-547887
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-547887: exit status 85 (68.95478ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-777799 | jenkins | v1.33.1 | 07 Aug 24 18:29 UTC |                     |
	|         | -p download-only-777799        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 07 Aug 24 18:30 UTC | 07 Aug 24 18:30 UTC |
	| delete  | -p download-only-777799        | download-only-777799 | jenkins | v1.33.1 | 07 Aug 24 18:30 UTC | 07 Aug 24 18:30 UTC |
	| start   | -o=json --download-only        | download-only-547887 | jenkins | v1.33.1 | 07 Aug 24 18:30 UTC |                     |
	|         | -p download-only-547887        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/07 18:30:05
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0807 18:30:05.172487  448695 out.go:291] Setting OutFile to fd 1 ...
	I0807 18:30:05.172623  448695 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 18:30:05.172633  448695 out.go:304] Setting ErrFile to fd 2...
	I0807 18:30:05.172639  448695 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 18:30:05.172933  448695 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19389-443116/.minikube/bin
	I0807 18:30:05.173399  448695 out.go:298] Setting JSON to true
	I0807 18:30:05.174330  448695 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":7957,"bootTime":1723047449,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0807 18:30:05.174407  448695 start.go:139] virtualization:  
	I0807 18:30:05.176942  448695 out.go:97] [download-only-547887] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0807 18:30:05.177161  448695 notify.go:220] Checking for updates...
	I0807 18:30:05.179273  448695 out.go:169] MINIKUBE_LOCATION=19389
	I0807 18:30:05.181474  448695 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0807 18:30:05.183474  448695 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19389-443116/kubeconfig
	I0807 18:30:05.185869  448695 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19389-443116/.minikube
	I0807 18:30:05.187747  448695 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0807 18:30:05.191246  448695 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0807 18:30:05.191547  448695 driver.go:392] Setting default libvirt URI to qemu:///system
	I0807 18:30:05.214054  448695 docker.go:123] docker version: linux-27.1.1:Docker Engine - Community
	I0807 18:30:05.214173  448695 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0807 18:30:05.298439  448695 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-08-07 18:30:05.286454652 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0807 18:30:05.298554  448695 docker.go:307] overlay module found
	I0807 18:30:05.300416  448695 out.go:97] Using the docker driver based on user configuration
	I0807 18:30:05.300447  448695 start.go:297] selected driver: docker
	I0807 18:30:05.300455  448695 start.go:901] validating driver "docker" against <nil>
	I0807 18:30:05.300578  448695 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0807 18:30:05.366468  448695 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-08-07 18:30:05.356153522 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0807 18:30:05.366644  448695 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0807 18:30:05.366924  448695 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0807 18:30:05.367086  448695 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0807 18:30:05.369401  448695 out.go:169] Using Docker driver with root privileges
	I0807 18:30:05.371350  448695 cni.go:84] Creating CNI manager for ""
	I0807 18:30:05.371374  448695 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0807 18:30:05.371388  448695 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0807 18:30:05.371486  448695 start.go:340] cluster config:
	{Name:download-only-547887 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:download-only-547887 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 18:30:05.373717  448695 out.go:97] Starting "download-only-547887" primary control-plane node in "download-only-547887" cluster
	I0807 18:30:05.373750  448695 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0807 18:30:05.375809  448695 out.go:97] Pulling base image v0.0.44-1723026928-19389 ...
	I0807 18:30:05.375839  448695 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime containerd
	I0807 18:30:05.376006  448695 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 in local docker daemon
	I0807 18:30:05.391360  448695 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 to local cache
	I0807 18:30:05.391465  448695 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 in local cache directory
	I0807 18:30:05.391489  448695 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 in local cache directory, skipping pull
	I0807 18:30:05.391499  448695 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 exists in cache, skipping pull
	I0807 18:30:05.391506  448695 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 as a tarball
	I0807 18:30:05.468951  448695 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-containerd-overlay2-arm64.tar.lz4
	I0807 18:30:05.468975  448695 cache.go:56] Caching tarball of preloaded images
	I0807 18:30:05.469138  448695 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime containerd
	I0807 18:30:05.471196  448695 out.go:97] Downloading Kubernetes v1.30.3 preload ...
	I0807 18:30:05.471228  448695 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.30.3-containerd-overlay2-arm64.tar.lz4 ...
	I0807 18:30:05.588606  448695 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-containerd-overlay2-arm64.tar.lz4?checksum=md5:2969442dcdf6412905c6484ccc8dd1ed -> /home/jenkins/minikube-integration/19389-443116/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-containerd-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-547887 host does not exist
	  To start a cluster, run: "minikube start -p download-only-547887"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.3/LogsDuration (0.07s)
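
The preload tarball above is fetched with its md5 digest pinned in the URL query ("?checksum=md5:2969442dcdf6412905c6484ccc8dd1ed"). A minimal sketch of that verification step, using only the Go standard library (the file name and expected digest are taken from the log; the helper name is ours, not minikube's):

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"log"
	"os"
)

// fileMD5 streams a file through an md5 hash, the kind of check implied
// by the ?checksum=md5:... fragment of the preload download URL.
func fileMD5(path string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()
	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		return "", err
	}
	return hex.EncodeToString(h.Sum(nil)), nil
}

func main() {
	const want = "2969442dcdf6412905c6484ccc8dd1ed" // digest from the download URL above
	sum, err := fileMD5("preloaded-images-k8s-v18-v1.30.3-containerd-overlay2-arm64.tar.lz4")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("checksum ok:", sum == want)
}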

TestDownloadOnly/v1.30.3/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.30.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.30.3/DeleteAll (0.21s)

TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-547887
--- PASS: TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnly/v1.31.0-rc.0/json-events (6.61s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-156624 --force --alsologtostderr --kubernetes-version=v1.31.0-rc.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-156624 --force --alsologtostderr --kubernetes-version=v1.31.0-rc.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (6.614377163s)
--- PASS: TestDownloadOnly/v1.31.0-rc.0/json-events (6.61s)

TestDownloadOnly/v1.31.0-rc.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0-rc.0/preload-exists (0.00s)

TestDownloadOnly/v1.31.0-rc.0/LogsDuration (0.41s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-156624
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-156624: exit status 85 (409.648042ms)

-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-777799 | jenkins | v1.33.1 | 07 Aug 24 18:29 UTC |                     |
	|         | -p download-only-777799           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0      |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.33.1 | 07 Aug 24 18:30 UTC | 07 Aug 24 18:30 UTC |
	| delete  | -p download-only-777799           | download-only-777799 | jenkins | v1.33.1 | 07 Aug 24 18:30 UTC | 07 Aug 24 18:30 UTC |
	| start   | -o=json --download-only           | download-only-547887 | jenkins | v1.33.1 | 07 Aug 24 18:30 UTC |                     |
	|         | -p download-only-547887           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3      |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.33.1 | 07 Aug 24 18:30 UTC | 07 Aug 24 18:30 UTC |
	| delete  | -p download-only-547887           | download-only-547887 | jenkins | v1.33.1 | 07 Aug 24 18:30 UTC | 07 Aug 24 18:30 UTC |
	| start   | -o=json --download-only           | download-only-156624 | jenkins | v1.33.1 | 07 Aug 24 18:30 UTC |                     |
	|         | -p download-only-156624           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0 |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/07 18:30:12
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0807 18:30:12.654693  448898 out.go:291] Setting OutFile to fd 1 ...
	I0807 18:30:12.654826  448898 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 18:30:12.654837  448898 out.go:304] Setting ErrFile to fd 2...
	I0807 18:30:12.654842  448898 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 18:30:12.655092  448898 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19389-443116/.minikube/bin
	I0807 18:30:12.655476  448898 out.go:298] Setting JSON to true
	I0807 18:30:12.656377  448898 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":7964,"bootTime":1723047449,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0807 18:30:12.656450  448898 start.go:139] virtualization:  
	I0807 18:30:12.658563  448898 out.go:97] [download-only-156624] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0807 18:30:12.658873  448898 notify.go:220] Checking for updates...
	I0807 18:30:12.661014  448898 out.go:169] MINIKUBE_LOCATION=19389
	I0807 18:30:12.663134  448898 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0807 18:30:12.664668  448898 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19389-443116/kubeconfig
	I0807 18:30:12.666275  448898 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19389-443116/.minikube
	I0807 18:30:12.668042  448898 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0807 18:30:12.671496  448898 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0807 18:30:12.671830  448898 driver.go:392] Setting default libvirt URI to qemu:///system
	I0807 18:30:12.692669  448898 docker.go:123] docker version: linux-27.1.1:Docker Engine - Community
	I0807 18:30:12.692787  448898 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0807 18:30:12.755518  448898 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-08-07 18:30:12.746365463 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0807 18:30:12.755630  448898 docker.go:307] overlay module found
	I0807 18:30:12.757484  448898 out.go:97] Using the docker driver based on user configuration
	I0807 18:30:12.757515  448898 start.go:297] selected driver: docker
	I0807 18:30:12.757523  448898 start.go:901] validating driver "docker" against <nil>
	I0807 18:30:12.757639  448898 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0807 18:30:12.817797  448898 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-08-07 18:30:12.807429476 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0807 18:30:12.818020  448898 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0807 18:30:12.818356  448898 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0807 18:30:12.818562  448898 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0807 18:30:12.820600  448898 out.go:169] Using Docker driver with root privileges
	I0807 18:30:12.822120  448898 cni.go:84] Creating CNI manager for ""
	I0807 18:30:12.822144  448898 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0807 18:30:12.822164  448898 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0807 18:30:12.822268  448898 start.go:340] cluster config:
	{Name:download-only-156624 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:download-only-156624 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 18:30:12.824198  448898 out.go:97] Starting "download-only-156624" primary control-plane node in "download-only-156624" cluster
	I0807 18:30:12.824232  448898 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0807 18:30:12.825880  448898 out.go:97] Pulling base image v0.0.44-1723026928-19389 ...
	I0807 18:30:12.825935  448898 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime containerd
	I0807 18:30:12.826010  448898 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 in local docker daemon
	I0807 18:30:12.840993  448898 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 to local cache
	I0807 18:30:12.841145  448898 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 in local cache directory
	I0807 18:30:12.841168  448898 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 in local cache directory, skipping pull
	I0807 18:30:12.841179  448898 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 exists in cache, skipping pull
	I0807 18:30:12.841187  448898 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 as a tarball
	I0807 18:30:12.899549  448898 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-rc.0/preloaded-images-k8s-v18-v1.31.0-rc.0-containerd-overlay2-arm64.tar.lz4
	I0807 18:30:12.899582  448898 cache.go:56] Caching tarball of preloaded images
	I0807 18:30:12.899755  448898 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime containerd
	I0807 18:30:12.901632  448898 out.go:97] Downloading Kubernetes v1.31.0-rc.0 preload ...
	I0807 18:30:12.901652  448898 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-rc.0-containerd-overlay2-arm64.tar.lz4 ...
	I0807 18:30:13.008615  448898 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-rc.0/preloaded-images-k8s-v18-v1.31.0-rc.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:9f4f64d897eefd701781dd1aad6e4f21 -> /home/jenkins/minikube-integration/19389-443116/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-containerd-overlay2-arm64.tar.lz4
	I0807 18:30:17.550778  448898 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.0-rc.0-containerd-overlay2-arm64.tar.lz4 ...
	I0807 18:30:17.550895  448898 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19389-443116/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-containerd-overlay2-arm64.tar.lz4 ...
	
	
	* The control-plane node download-only-156624 host does not exist
	  To start a cluster, run: "minikube start -p download-only-156624"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0-rc.0/LogsDuration (0.41s)
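
The v1.31.0-rc.0 preload uses the same "?checksum=md5:<hex>" convention, and the log shows the checksum being saved and then verified after download (preload.go:247 and preload.go:254). A small sketch of splitting that spec out of the URL, standard library only (the URL is copied from the log; the parsing code is ours):

package main

import (
	"fmt"
	"log"
	"net/url"
	"strings"
)

func main() {
	// Download URL copied from the log above.
	raw := "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-rc.0/preloaded-images-k8s-v18-v1.31.0-rc.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:9f4f64d897eefd701781dd1aad6e4f21"
	u, err := url.Parse(raw)
	if err != nil {
		log.Fatal(err)
	}
	spec := u.Query().Get("checksum") // "md5:9f4f64d897eefd701781dd1aad6e4f21"
	algo, digest, ok := strings.Cut(spec, ":")
	if !ok {
		log.Fatalf("unexpected checksum spec %q", spec)
	}
	fmt.Println("algorithm:", algo, "digest:", digest)
}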

TestDownloadOnly/v1.31.0-rc.0/DeleteAll (0.36s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.0-rc.0/DeleteAll (0.36s)

TestDownloadOnly/v1.31.0-rc.0/DeleteAlwaysSucceeds (0.25s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-156624
--- PASS: TestDownloadOnly/v1.31.0-rc.0/DeleteAlwaysSucceeds (0.25s)

TestBinaryMirror (0.55s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-181151 --alsologtostderr --binary-mirror http://127.0.0.1:40631 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-181151" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-181151
--- PASS: TestBinaryMirror (0.55s)
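
TestBinaryMirror points minikube at a local HTTP mirror ("--binary-mirror http://127.0.0.1:40631"). A hypothetical sketch of serving such a mirror from a directory of pre-downloaded Kubernetes release binaries; the directory name and layout are assumptions, only the address comes from the test invocation:

package main

import (
	"log"
	"net/http"
)

func main() {
	// Hypothetical: ./mirror holds pre-fetched kubectl/kubeadm/kubelet
	// binaries so a download-only start never reaches the public site.
	fs := http.FileServer(http.Dir("./mirror"))
	log.Fatal(http.ListenAndServe("127.0.0.1:40631", fs))
}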

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-553671
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-553671: exit status 85 (72.66343ms)

-- stdout --
	* Profile "addons-553671" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-553671"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)
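
Both PreSetup tests pass precisely because the command fails: the harness expects "exit status 85" when an addon is toggled on a profile that does not exist. A rough sketch of how such an exit status is read back through os/exec (binary and arguments are from the log; the error handling is generic, not minikube's own helper):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "addons", "enable", "dashboard", "-p", "addons-553671")
	out, err := cmd.CombinedOutput()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// For a missing profile this prints: exit status 85
		fmt.Printf("exit status %d\n%s", exitErr.ExitCode(), out)
	}
}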

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-553671
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-553671: exit status 85 (67.679253ms)

-- stdout --
	* Profile "addons-553671" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-553671"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

TestAddons/Setup (223.29s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p addons-553671 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p addons-553671 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns: (3m43.287414124s)
--- PASS: TestAddons/Setup (223.29s)

TestAddons/serial/GCPAuth/Namespaces (0.18s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-553671 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-553671 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.18s)

TestAddons/parallel/Registry (16.11s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 8.484848ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-698f998955-4rlp5" [1c71360f-b606-4ccd-a70a-f81190028951] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.008510001s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-t8rmr" [0b3865c3-bbbc-4aa1-9b36-1f77fd0af331] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.005108819s
addons_test.go:342: (dbg) Run:  kubectl --context addons-553671 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-553671 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-553671 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.087033825s)
addons_test.go:361: (dbg) Run:  out/minikube-linux-arm64 -p addons-553671 ip
2024/08/07 18:38:00 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-arm64 -p addons-553671 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.11s)
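
The repeated helpers_test.go:344 lines above are the harness polling pods by label until they report healthy. A simplified stand-in for that wait, shelling out to kubectl (context, namespace, and selector come from the Registry test; the loop itself is ours, not the real helper):

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
	"time"
)

// waitForRunningPod polls kubectl until a pod matching the selector reports
// phase Running, or the deadline passes.
func waitForRunningPod(kubeContext, ns, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", kubeContext,
			"get", "pods", "-n", ns, "-l", selector,
			"-o", "jsonpath={.items[*].status.phase}").Output()
		if err == nil && strings.Contains(string(out), "Running") {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("no Running pod for %q in %q after %s", selector, ns, timeout)
}

func main() {
	if err := waitForRunningPod("addons-553671", "kube-system", "actual-registry=true", 6*time.Minute); err != nil {
		log.Fatal(err)
	}
	fmt.Println("registry pod is Running")
}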

TestAddons/parallel/Ingress (18.57s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-553671 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-553671 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-553671 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [221993a6-0d30-44e8-a7b7-1582b9c4bb70] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [221993a6-0d30-44e8-a7b7-1582b9c4bb70] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 8.003830775s
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-553671 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-553671 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-553671 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p addons-553671 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-arm64 -p addons-553671 addons disable ingress-dns --alsologtostderr -v=1: (1.123373901s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-arm64 -p addons-553671 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-arm64 -p addons-553671 addons disable ingress --alsologtostderr -v=1: (7.826682049s)
--- PASS: TestAddons/parallel/Ingress (18.57s)
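
The Ingress check above hits the node over plain HTTP while forcing the virtual host ("curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"). The Go equivalent is worth noting because Go takes the Host from the request struct rather than from a header; a small sketch (URL and host are from the test, the rest is generic):

package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
)

func main() {
	req, err := http.NewRequest("GET", "http://127.0.0.1/", nil)
	if err != nil {
		log.Fatal(err)
	}
	// Equivalent of curl -H 'Host: nginx.example.com': Go sends req.Host
	// and ignores a Host key set on req.Header.
	req.Host = "nginx.example.com"
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, len(body), "bytes")
}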

TestAddons/parallel/InspektorGadget (12.01s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-jvc5n" [f28419c4-a615-4cf7-bde0-5068511a7e9d] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004555521s
addons_test.go:851: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-553671
addons_test.go:851: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-553671: (6.000152044s)
--- PASS: TestAddons/parallel/InspektorGadget (12.01s)

TestAddons/parallel/MetricsServer (5.91s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 4.109031ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-tkfkt" [13342b20-1c93-45d1-8e45-d50e8aeec659] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.021678733s
addons_test.go:417: (dbg) Run:  kubectl --context addons-553671 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-linux-arm64 -p addons-553671 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.91s)

TestAddons/parallel/CSI (50.59s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 9.525806ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-553671 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-553671 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-553671 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-553671 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-553671 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-553671 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-553671 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-553671 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-553671 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-553671 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-553671 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-553671 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-553671 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-553671 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [4427ced6-8c4c-449d-89fb-7724bc11a6ae] Pending
helpers_test.go:344: "task-pv-pod" [4427ced6-8c4c-449d-89fb-7724bc11a6ae] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [4427ced6-8c4c-449d-89fb-7724bc11a6ae] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.003254789s
addons_test.go:590: (dbg) Run:  kubectl --context addons-553671 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-553671 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-553671 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-553671 delete pod task-pv-pod
addons_test.go:600: (dbg) Done: kubectl --context addons-553671 delete pod task-pv-pod: (1.199378527s)
addons_test.go:606: (dbg) Run:  kubectl --context addons-553671 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-553671 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-553671 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-553671 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-553671 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-553671 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-553671 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-553671 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-553671 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-553671 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-553671 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-553671 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [5f86228e-e6b9-40c3-8df5-68cdd4bbdbcc] Pending
helpers_test.go:344: "task-pv-pod-restore" [5f86228e-e6b9-40c3-8df5-68cdd4bbdbcc] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [5f86228e-e6b9-40c3-8df5-68cdd4bbdbcc] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003426421s
addons_test.go:632: (dbg) Run:  kubectl --context addons-553671 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-553671 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-553671 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-arm64 -p addons-553671 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-arm64 -p addons-553671 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.838545656s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-arm64 -p addons-553671 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:648: (dbg) Done: out/minikube-linux-arm64 -p addons-553671 addons disable volumesnapshots --alsologtostderr -v=1: (1.384588888s)
--- PASS: TestAddons/parallel/CSI (50.59s)

TestAddons/parallel/Headlamp (15.98s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-553671 --alsologtostderr -v=1
addons_test.go:830: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-553671 --alsologtostderr -v=1: (1.152517724s)
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-9d868696f-6dmt2" [4e4269f5-d3f9-42b3-a7ae-a0372d52bf14] Pending
helpers_test.go:344: "headlamp-9d868696f-6dmt2" [4e4269f5-d3f9-42b3-a7ae-a0372d52bf14] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-9d868696f-6dmt2" [4e4269f5-d3f9-42b3-a7ae-a0372d52bf14] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 9.004444045s
addons_test.go:839: (dbg) Run:  out/minikube-linux-arm64 -p addons-553671 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-arm64 -p addons-553671 addons disable headlamp --alsologtostderr -v=1: (5.817909619s)
--- PASS: TestAddons/parallel/Headlamp (15.98s)

TestAddons/parallel/CloudSpanner (6.65s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5455fb9b69-tzgsj" [458f036b-12ad-4bca-a19f-23e765981144] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003561175s
addons_test.go:870: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-553671
--- PASS: TestAddons/parallel/CloudSpanner (6.65s)

TestAddons/parallel/LocalPath (53.9s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-553671 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-553671 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-553671 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-553671 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-553671 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-553671 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-553671 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-553671 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [7e0bb0fa-9982-45ee-94e7-7422a068aab2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [7e0bb0fa-9982-45ee-94e7-7422a068aab2] Running
helpers_test.go:344: "test-local-path" [7e0bb0fa-9982-45ee-94e7-7422a068aab2] Running / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [7e0bb0fa-9982-45ee-94e7-7422a068aab2] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.004344942s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-553671 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-arm64 -p addons-553671 ssh "cat /opt/local-path-provisioner/pvc-77234963-1530-473e-a341-3e4584f36aa5_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-553671 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-553671 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-arm64 -p addons-553671 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-linux-arm64 -p addons-553671 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.601096341s)
--- PASS: TestAddons/parallel/LocalPath (53.90s)
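
The repeated jsonpath queries above are a bind-polling loop. A minimal shell equivalent, assuming the same testdata manifests; the 2s interval is arbitrary:

    kubectl --context addons-553671 apply -f testdata/storage-provisioner-rancher/pvc.yaml
    kubectl --context addons-553671 apply -f testdata/storage-provisioner-rancher/pod.yaml
    # The default local-path StorageClass binds on first consumer, so the claim
    # stays Pending until the pod is scheduled.
    until [ "$(kubectl --context addons-553671 get pvc test-pvc -n default -o jsonpath='{.status.phase}')" = "Bound" ]; do
        sleep 2
    done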

TestAddons/parallel/NvidiaDevicePlugin (5.68s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-xf5g4" [ee3238cf-e734-4685-8582-6e77f90d5f77] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.005721987s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-553671
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.68s)

TestAddons/parallel/Yakd (11.97s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-799879c74f-vss4r" [64faf22c-9f5a-4139-8f7f-56d2596c0c9b] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003370321s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-arm64 -p addons-553671 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-arm64 -p addons-553671 addons disable yakd --alsologtostderr -v=1: (5.961676855s)
--- PASS: TestAddons/parallel/Yakd (11.97s)

TestAddons/StoppedEnableDisable (12.27s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-553671
addons_test.go:174: (dbg) Done: out/minikube-linux-arm64 stop -p addons-553671: (12.000563138s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-553671
addons_test.go:182: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-553671
addons_test.go:187: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-553671
--- PASS: TestAddons/StoppedEnableDisable (12.27s)

TestCertOptions (34.12s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-890209 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
E0807 19:19:05.258892  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/addons-553671/client.crt: no such file or directory
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-890209 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (31.370899159s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-890209 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-890209 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-890209 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-890209" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-890209
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-890209: (2.077424696s)
--- PASS: TestCertOptions (34.12s)
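
The SAN check reduces to two commands. A minimal sketch with the same flags the test passes:

    minikube start -p cert-options-890209 --memory=2048 \
        --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 \
        --apiserver-names=localhost --apiserver-names=www.google.com \
        --apiserver-port=8555 --driver=docker --container-runtime=containerd
    # The extra IPs and names should show up under X509v3 Subject Alternative Name.
    minikube -p cert-options-890209 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
        | grep -A1 'Subject Alternative Name'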

TestCertExpiration (226.94s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-863658 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd
E0807 19:18:34.879798  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/functional-022013/client.crt: no such file or directory
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-863658 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (36.429003178s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-863658 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-863658 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (7.712945725s)
helpers_test.go:175: Cleaning up "cert-expiration-863658" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-863658
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-863658: (2.794429415s)
--- PASS: TestCertExpiration (226.94s)
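
The two starts together take under a minute; most of the 226.94s is spent waiting out the 3m certificate TTL before the second start can observe expired certs. The rotation itself:

    minikube start -p cert-expiration-863658 --memory=2048 --cert-expiration=3m --driver=docker --container-runtime=containerd
    # After the 3m TTL has elapsed, restarting with a longer expiration regenerates the certs.
    minikube start -p cert-expiration-863658 --memory=2048 --cert-expiration=8760h --driver=docker --container-runtime=containerd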

TestForceSystemdFlag (40.56s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-727876 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-727876 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (37.908210692s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-727876 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-727876" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-727876
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-727876: (2.290562784s)
--- PASS: TestForceSystemdFlag (40.56s)
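
The assertion behind docker_test.go:121 is a check on containerd's on-disk config. A rough equivalent, assuming the property of interest is the runc SystemdCgroup flag:

    minikube start -p force-systemd-flag-727876 --memory=2048 --force-systemd --driver=docker --container-runtime=containerd
    # With --force-systemd, the runc options are expected to carry SystemdCgroup = true.
    minikube -p force-systemd-flag-727876 ssh "cat /etc/containerd/config.toml" | grep SystemdCgroup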

TestForceSystemdEnv (54.75s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-380157 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-380157 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (52.215574154s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-380157 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-380157" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-380157
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-380157: (2.149585134s)
--- PASS: TestForceSystemdEnv (54.75s)

TestDockerEnvContainerd (48.33s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-976760 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-976760 --driver=docker  --container-runtime=containerd: (31.9153625s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-976760"
docker_test.go:189: (dbg) Done: /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-976760": (1.096749608s)
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-4pbwyJwCv4jZ/agent.467572" SSH_AGENT_PID="467573" DOCKER_HOST=ssh://docker@127.0.0.1:33168 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-4pbwyJwCv4jZ/agent.467572" SSH_AGENT_PID="467573" DOCKER_HOST=ssh://docker@127.0.0.1:33168 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-4pbwyJwCv4jZ/agent.467572" SSH_AGENT_PID="467573" DOCKER_HOST=ssh://docker@127.0.0.1:33168 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.435911622s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-4pbwyJwCv4jZ/agent.467572" SSH_AGENT_PID="467573" DOCKER_HOST=ssh://docker@127.0.0.1:33168 docker image ls"
docker_test.go:250: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-4pbwyJwCv4jZ/agent.467572" SSH_AGENT_PID="467573" DOCKER_HOST=ssh://docker@127.0.0.1:33168 docker image ls": (1.297617472s)
helpers_test.go:175: Cleaning up "dockerenv-976760" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-976760
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-976760: (2.069679405s)
--- PASS: TestDockerEnvContainerd (48.33s)
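
The docker-env round trip above runs through an SSH agent; interactively the flow collapses to an eval. A minimal sketch, assuming a running profile named dockerenv-976760:

    # Export DOCKER_HOST (ssh://...) and load the node's key into ssh-agent.
    eval "$(minikube -p dockerenv-976760 docker-env --ssh-host --ssh-add)"
    docker version
    # BuildKit is disabled because the test drives the classic builder path.
    DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env
    docker image ls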

TestErrorSpam/setup (34.34s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-560180 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-560180 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-560180 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-560180 --driver=docker  --container-runtime=containerd: (34.336271666s)
--- PASS: TestErrorSpam/setup (34.34s)

TestErrorSpam/start (0.74s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-560180 --log_dir /tmp/nospam-560180 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-560180 --log_dir /tmp/nospam-560180 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-560180 --log_dir /tmp/nospam-560180 start --dry-run
--- PASS: TestErrorSpam/start (0.74s)

TestErrorSpam/status (1s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-560180 --log_dir /tmp/nospam-560180 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-560180 --log_dir /tmp/nospam-560180 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-560180 --log_dir /tmp/nospam-560180 status
--- PASS: TestErrorSpam/status (1.00s)

TestErrorSpam/pause (1.7s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-560180 --log_dir /tmp/nospam-560180 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-560180 --log_dir /tmp/nospam-560180 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-560180 --log_dir /tmp/nospam-560180 pause
--- PASS: TestErrorSpam/pause (1.70s)

TestErrorSpam/unpause (1.78s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-560180 --log_dir /tmp/nospam-560180 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-560180 --log_dir /tmp/nospam-560180 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-560180 --log_dir /tmp/nospam-560180 unpause
--- PASS: TestErrorSpam/unpause (1.78s)

TestErrorSpam/stop (1.41s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-560180 --log_dir /tmp/nospam-560180 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-560180 --log_dir /tmp/nospam-560180 stop: (1.231306787s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-560180 --log_dir /tmp/nospam-560180 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-560180 --log_dir /tmp/nospam-560180 stop
--- PASS: TestErrorSpam/stop (1.41s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/19389-443116/.minikube/files/etc/test/nested/copy/448488/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (70.68s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-arm64 start -p functional-022013 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
functional_test.go:2230: (dbg) Done: out/minikube-linux-arm64 start -p functional-022013 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (1m10.676995139s)
--- PASS: TestFunctional/serial/StartWithProxy (70.68s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (6.09s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-arm64 start -p functional-022013 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-arm64 start -p functional-022013 --alsologtostderr -v=8: (6.086192537s)
functional_test.go:659: soft start took 6.09049091s for "functional-022013" cluster.
--- PASS: TestFunctional/serial/SoftStart (6.09s)

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.09s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-022013 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

TestFunctional/serial/CacheCmd/cache/add_remote (4.59s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-022013 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-022013 cache add registry.k8s.io/pause:3.1: (1.575885466s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-022013 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-022013 cache add registry.k8s.io/pause:3.3: (1.63512251s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-022013 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-022013 cache add registry.k8s.io/pause:latest: (1.378439368s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.59s)

TestFunctional/serial/CacheCmd/cache/add_local (1.55s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-022013 /tmp/TestFunctionalserialCacheCmdcacheadd_local3249729557/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-arm64 -p functional-022013 cache add minikube-local-cache-test:functional-022013
functional_test.go:1085: (dbg) Done: out/minikube-linux-arm64 -p functional-022013 cache add minikube-local-cache-test:functional-022013: (1.043949583s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-arm64 -p functional-022013 cache delete minikube-local-cache-test:functional-022013
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-022013
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.55s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-arm64 -p functional-022013 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.19s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-arm64 -p functional-022013 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-arm64 -p functional-022013 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-022013 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (303.868043ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-arm64 -p functional-022013 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-linux-arm64 -p functional-022013 cache reload: (1.267241556s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-arm64 -p functional-022013 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.19s)
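
The reload cycle above, spelled out; crictl runs inside the node, and the image name matches the test:

    minikube -p functional-022013 ssh sudo crictl rmi registry.k8s.io/pause:latest
    # inspecti now fails: the image is gone from the node's containerd store.
    minikube -p functional-022013 ssh sudo crictl inspecti registry.k8s.io/pause:latest || true
    # cache reload pushes the images in minikube's local cache back onto the node.
    minikube -p functional-022013 cache reload
    minikube -p functional-022013 ssh sudo crictl inspecti registry.k8s.io/pause:latest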

TestFunctional/serial/CacheCmd/cache/delete (0.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

TestFunctional/serial/MinikubeKubectlCmd (0.16s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-arm64 -p functional-022013 kubectl -- --context functional-022013 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.16s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-022013 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

TestFunctional/serial/ExtraConfig (36.76s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-arm64 start -p functional-022013 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-arm64 start -p functional-022013 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (36.757924935s)
functional_test.go:757: restart took 36.758056763s for "functional-022013" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (36.76s)
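
--extra-config threads a component flag through to the named control-plane component on (re)start. The invocation exercised here, plus a quick health check:

    minikube start -p functional-022013 \
        --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
    # --wait=all blocks until the control-plane components report healthy.
    kubectl --context functional-022013 get po -l tier=control-plane -n kube-system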

TestFunctional/serial/ComponentHealth (0.1s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-022013 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

TestFunctional/serial/LogsCmd (1.63s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-arm64 -p functional-022013 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-arm64 -p functional-022013 logs: (1.629933953s)
--- PASS: TestFunctional/serial/LogsCmd (1.63s)

TestFunctional/serial/LogsFileCmd (1.61s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-arm64 -p functional-022013 logs --file /tmp/TestFunctionalserialLogsFileCmd345234246/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-arm64 -p functional-022013 logs --file /tmp/TestFunctionalserialLogsFileCmd345234246/001/logs.txt: (1.606136051s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.61s)

TestFunctional/serial/InvalidService (3.76s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-022013 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-022013
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-022013: exit status 115 (397.547809ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31248 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-022013 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.76s)

TestFunctional/parallel/ConfigCmd (0.48s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-022013 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-022013 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-022013 config get cpus: exit status 14 (112.780144ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-022013 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-022013 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-022013 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-022013 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-022013 config get cpus: exit status 14 (65.442433ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.48s)
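
As the stderr above shows, config get exits with status 14 when the key is unset. The full round trip:

    minikube -p functional-022013 config set cpus 2
    minikube -p functional-022013 config get cpus     # prints 2
    minikube -p functional-022013 config unset cpus
    minikube -p functional-022013 config get cpus     # exit status 14: key not found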

TestFunctional/parallel/DashboardCmd (11.66s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-022013 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-022013 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 482968: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (11.66s)

TestFunctional/parallel/DryRun (0.43s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-arm64 start -p functional-022013 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-022013 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (187.354758ms)

-- stdout --
	* [functional-022013] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19389
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19389-443116/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19389-443116/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0807 18:44:21.497947  482447 out.go:291] Setting OutFile to fd 1 ...
	I0807 18:44:21.498150  482447 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 18:44:21.498178  482447 out.go:304] Setting ErrFile to fd 2...
	I0807 18:44:21.498196  482447 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 18:44:21.498464  482447 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19389-443116/.minikube/bin
	I0807 18:44:21.498851  482447 out.go:298] Setting JSON to false
	I0807 18:44:21.499841  482447 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":8813,"bootTime":1723047449,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0807 18:44:21.499935  482447 start.go:139] virtualization:  
	I0807 18:44:21.502239  482447 out.go:177] * [functional-022013] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0807 18:44:21.504653  482447 out.go:177]   - MINIKUBE_LOCATION=19389
	I0807 18:44:21.504714  482447 notify.go:220] Checking for updates...
	I0807 18:44:21.508478  482447 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0807 18:44:21.510173  482447 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19389-443116/kubeconfig
	I0807 18:44:21.511908  482447 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19389-443116/.minikube
	I0807 18:44:21.513795  482447 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0807 18:44:21.515716  482447 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0807 18:44:21.517760  482447 config.go:182] Loaded profile config "functional-022013": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.3
	I0807 18:44:21.518261  482447 driver.go:392] Setting default libvirt URI to qemu:///system
	I0807 18:44:21.541102  482447 docker.go:123] docker version: linux-27.1.1:Docker Engine - Community
	I0807 18:44:21.541228  482447 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0807 18:44:21.626927  482447 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-08-07 18:44:21.617045635 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0807 18:44:21.627047  482447 docker.go:307] overlay module found
	I0807 18:44:21.629038  482447 out.go:177] * Using the docker driver based on existing profile
	I0807 18:44:21.630633  482447 start.go:297] selected driver: docker
	I0807 18:44:21.630653  482447 start.go:901] validating driver "docker" against &{Name:functional-022013 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-022013 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:do
cker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 18:44:21.630802  482447 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0807 18:44:21.632901  482447 out.go:177] 
	W0807 18:44:21.634597  482447 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0807 18:44:21.636541  482447 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-arm64 start -p functional-022013 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.43s)

TestFunctional/parallel/InternationalLanguage (0.18s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-arm64 start -p functional-022013 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-022013 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (180.647782ms)

-- stdout --
	* [functional-022013] minikube v1.33.1 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19389
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19389-443116/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19389-443116/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0807 18:44:21.323666  482401 out.go:291] Setting OutFile to fd 1 ...
	I0807 18:44:21.323816  482401 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 18:44:21.323827  482401 out.go:304] Setting ErrFile to fd 2...
	I0807 18:44:21.323832  482401 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 18:44:21.324274  482401 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19389-443116/.minikube/bin
	I0807 18:44:21.324680  482401 out.go:298] Setting JSON to false
	I0807 18:44:21.325720  482401 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":8813,"bootTime":1723047449,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0807 18:44:21.325795  482401 start.go:139] virtualization:  
	I0807 18:44:21.328393  482401 out.go:177] * [functional-022013] minikube v1.33.1 sur Ubuntu 20.04 (arm64)
	I0807 18:44:21.330557  482401 out.go:177]   - MINIKUBE_LOCATION=19389
	I0807 18:44:21.330641  482401 notify.go:220] Checking for updates...
	I0807 18:44:21.334629  482401 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0807 18:44:21.336459  482401 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19389-443116/kubeconfig
	I0807 18:44:21.338400  482401 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19389-443116/.minikube
	I0807 18:44:21.340727  482401 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0807 18:44:21.342943  482401 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0807 18:44:21.345474  482401 config.go:182] Loaded profile config "functional-022013": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.3
	I0807 18:44:21.345993  482401 driver.go:392] Setting default libvirt URI to qemu:///system
	I0807 18:44:21.370553  482401 docker.go:123] docker version: linux-27.1.1:Docker Engine - Community
	I0807 18:44:21.370735  482401 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0807 18:44:21.438193  482401 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-08-07 18:44:21.427348819 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0807 18:44:21.438301  482401 docker.go:307] overlay module found
	I0807 18:44:21.440431  482401 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0807 18:44:21.442243  482401 start.go:297] selected driver: docker
	I0807 18:44:21.442261  482401 start.go:901] validating driver "docker" against &{Name:functional-022013 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-022013 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:do
cker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 18:44:21.442380  482401 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0807 18:44:21.444812  482401 out.go:177] 
	W0807 18:44:21.446454  482401 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0807 18:44:21.448482  482401 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.18s)

TestFunctional/parallel/StatusCmd (1.04s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-arm64 -p functional-022013 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-arm64 -p functional-022013 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-arm64 -p functional-022013 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.04s)
TestFunctional/parallel/ServiceCmdConnect (9.62s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1623: (dbg) Run:  kubectl --context functional-022013 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-022013 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-6f49f58cd5-sqq6n" [a2e2b843-b234-44ad-9609-28bf34403162] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
E0807 18:44:05.900391  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/addons-553671/client.crt: no such file or directory
helpers_test.go:344: "hello-node-connect-6f49f58cd5-sqq6n" [a2e2b843-b234-44ad-9609-28bf34403162] Running
E0807 18:44:10.382303  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/addons-553671/client.crt: no such file or directory
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 9.004024818s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-arm64 -p functional-022013 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.49.2:31760
functional_test.go:1671: http://192.168.49.2:31760: success! body:

Hostname: hello-node-connect-6f49f58cd5-sqq6n

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31760
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (9.62s)
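
The flow above is the standard NodePort round-trip: create a deployment, expose it on port 8080, ask minikube for the externally reachable URL, then fetch the echoserver body. A condensed sketch; the closing curl is an illustrative stand-in for the in-process HTTP check the test performs:

  kubectl --context functional-022013 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
  kubectl --context functional-022013 expose deployment hello-node-connect --type=NodePort --port=8080
  URL=$(out/minikube-linux-arm64 -p functional-022013 service hello-node-connect --url)
  curl -s "$URL"
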
TestFunctional/parallel/AddonsCmd (0.14s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-arm64 -p functional-022013 addons list
E0807 18:44:05.579721  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/addons-553671/client.crt: no such file or directory
functional_test.go:1698: (dbg) Run:  out/minikube-linux-arm64 -p functional-022013 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)
TestFunctional/parallel/PersistentVolumeClaim (36.2s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [0db41f4a-c976-43c2-8807-200684523fce] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003901468s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-022013 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-022013 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-022013 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-022013 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-022013 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-022013 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-022013 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [f2307b8e-3736-49e5-a26f-f52dc5666c6e] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
helpers_test.go:344: "sp-pod" [f2307b8e-3736-49e5-a26f-f52dc5666c6e] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [f2307b8e-3736-49e5-a26f-f52dc5666c6e] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.004438597s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-022013 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-022013 delete -f testdata/storage-provisioner/pod.yaml
E0807 18:44:05.260717  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/addons-553671/client.crt: no such file or directory
E0807 18:44:05.267075  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/addons-553671/client.crt: no such file or directory
E0807 18:44:05.277374  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/addons-553671/client.crt: no such file or directory
E0807 18:44:05.297947  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/addons-553671/client.crt: no such file or directory
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-022013 delete -f testdata/storage-provisioner/pod.yaml: (1.381661953s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-022013 apply -f testdata/storage-provisioner/pod.yaml
E0807 18:44:05.338806  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/addons-553671/client.crt: no such file or directory
E0807 18:44:05.418893  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/addons-553671/client.crt: no such file or directory
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [f172ee9e-9e45-4808-b39f-fbfaedbb2cbb] Pending
helpers_test.go:344: "sp-pod" [f172ee9e-9e45-4808-b39f-fbfaedbb2cbb] Running
E0807 18:44:06.541047  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/addons-553671/client.crt: no such file or directory
E0807 18:44:07.821550  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/addons-553671/client.crt: no such file or directory
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.004134213s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-022013 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (36.20s)
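
The test proves persistence by writing /tmp/mount/foo from one pod, deleting that pod, scheduling a replacement against the same claim, and listing the file again. The contents of testdata/storage-provisioner/pvc.yaml are not shown in this log; a hypothetical minimal claim against the default StorageClass would look like:

  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: myclaim
  spec:
    accessModes: ["ReadWriteOnce"]
    resources:
      requests:
        storage: 500Mi
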
TestFunctional/parallel/SSHCmd (0.67s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-arm64 -p functional-022013 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-arm64 -p functional-022013 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.67s)
TestFunctional/parallel/CpCmd (2.41s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-022013 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-022013 ssh -n functional-022013 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-022013 cp functional-022013:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3926225238/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-022013 ssh -n functional-022013 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-022013 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-022013 ssh -n functional-022013 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.41s)
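
minikube cp works in both directions and creates missing target directories, which is what the /tmp/does/not/exist case verifies: host-to-guest takes a plain destination path, while guest-to-host prefixes the source with the profile name. Condensed from the log (the local destination in the second line is illustrative):

  out/minikube-linux-arm64 -p functional-022013 cp testdata/cp-test.txt /home/docker/cp-test.txt
  out/minikube-linux-arm64 -p functional-022013 cp functional-022013:/home/docker/cp-test.txt ./cp-test.txt
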
TestFunctional/parallel/FileSync (0.35s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/448488/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-arm64 -p functional-022013 ssh "sudo cat /etc/test/nested/copy/448488/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.35s)
TestFunctional/parallel/CertSync (2.1s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/448488.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-022013 ssh "sudo cat /etc/ssl/certs/448488.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/448488.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-022013 ssh "sudo cat /usr/share/ca-certificates/448488.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-022013 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/4484882.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-022013 ssh "sudo cat /etc/ssl/certs/4484882.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/4484882.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-022013 ssh "sudo cat /usr/share/ca-certificates/4484882.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-022013 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.10s)
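
Each certificate is checked in /etc/ssl/certs, in /usr/share/ca-certificates, and under an eight-hex-digit .0 name, the OpenSSL subject-hash form used for directory lookups. A sketch for confirming a hash-to-file pairing, assuming openssl is available inside the guest image:

  out/minikube-linux-arm64 -p functional-022013 ssh "openssl x509 -noout -hash -in /etc/ssl/certs/448488.pem"
  # if this certificate is the one behind 51391683.0, the command prints 51391683
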
TestFunctional/parallel/NodeLabels (0.11s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-022013 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.11s)
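
The go-template above walks the first node's .metadata.labels map and prints each key. An equivalent check without templating:

  kubectl --context functional-022013 get nodes --show-labels
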
TestFunctional/parallel/NonActiveRuntimeDisabled (0.67s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-022013 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-022013 ssh "sudo systemctl is-active docker": exit status 1 (324.192956ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-022013 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-022013 ssh "sudo systemctl is-active crio": exit status 1 (341.865963ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.67s)
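
This cluster runs containerd, so the docker and crio units must be inactive; systemctl is-active exits 0 only for an active unit, which is why both probes above print "inactive" and fail with exit status 3. The complementary positive check would be:

  out/minikube-linux-arm64 -p functional-022013 ssh "sudo systemctl is-active containerd"
  # expected: "active", exit status 0
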
TestFunctional/parallel/License (0.36s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.36s)
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.68s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-022013 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-022013 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-022013 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-022013 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 479795: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.68s)
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-022013 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (30.44s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-022013 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [50bbf69d-3115-468b-a7d0-7c03cd725eba] Pending
helpers_test.go:344: "nginx-svc" [50bbf69d-3115-468b-a7d0-7c03cd725eba] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
helpers_test.go:344: "nginx-svc" [50bbf69d-3115-468b-a7d0-7c03cd725eba] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [50bbf69d-3115-468b-a7d0-7c03cd725eba] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 30.004422578s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (30.44s)
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.1s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-022013 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.10s)
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.106.202.84 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-022013 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
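
Taken together, the tunnel subtests cover the full lifecycle: start the tunnel daemon, wait for the LoadBalancer service to be assigned an ingress IP, hit that IP directly, then tear the tunnel down. A condensed sketch; the backgrounding and kill are illustrative, since the test harness manages the daemon itself:

  out/minikube-linux-arm64 -p functional-022013 tunnel --alsologtostderr &
  TUNNEL_PID=$!
  kubectl --context functional-022013 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
  kill "$TUNNEL_PID"
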
TestFunctional/parallel/ServiceCmd/DeployApp (6.23s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1433: (dbg) Run:  kubectl --context functional-022013 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-022013 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-65f5d5cc78-7p55z" [351b5413-c3e1-4741-bac5-59326d6c762d] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-65f5d5cc78-7p55z" [351b5413-c3e1-4741-bac5-59326d6c762d] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.005243287s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.23s)
TestFunctional/parallel/ProfileCmd/profile_not_create (0.4s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
E0807 18:44:15.503360  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/addons-553671/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.40s)
TestFunctional/parallel/ProfileCmd/profile_list (0.44s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1311: Took "382.039601ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1325: Took "52.90627ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.44s)
TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1362: Took "328.992432ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1375: Took "49.40527ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)
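
Both listings return the same profile data; --light skips the per-profile cluster status checks, which is why it returns in about 50ms versus roughly 330ms for the full list. A sketch for consuming the JSON, assuming jq is installed and the output keeps its valid/invalid grouping:

  out/minikube-linux-arm64 profile list -o json | jq -r '.valid[].Name'
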
TestFunctional/parallel/MountCmd/any-port (7.28s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-022013 /tmp/TestFunctionalparallelMountCmdany-port1971611874/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1723056256505597505" to /tmp/TestFunctionalparallelMountCmdany-port1971611874/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1723056256505597505" to /tmp/TestFunctionalparallelMountCmdany-port1971611874/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1723056256505597505" to /tmp/TestFunctionalparallelMountCmdany-port1971611874/001/test-1723056256505597505
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-022013 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-022013 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (337.337616ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-022013 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-022013 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug  7 18:44 created-by-test
-rw-r--r-- 1 docker docker 24 Aug  7 18:44 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug  7 18:44 test-1723056256505597505
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-022013 ssh cat /mount-9p/test-1723056256505597505
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-022013 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [fc59695e-2630-477b-aaa7-803cbcf7ea51] Pending
helpers_test.go:344: "busybox-mount" [fc59695e-2630-477b-aaa7-803cbcf7ea51] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [fc59695e-2630-477b-aaa7-803cbcf7ea51] Running
helpers_test.go:344: "busybox-mount" [fc59695e-2630-477b-aaa7-803cbcf7ea51] Running / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [fc59695e-2630-477b-aaa7-803cbcf7ea51] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.004710574s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-022013 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-022013 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-022013 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-022013 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-022013 /tmp/TestFunctionalparallelMountCmdany-port1971611874/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.28s)
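
In the any-port variant minikube chooses the 9p server port itself. The first findmnt probe races the mount daemon coming up, which accounts for the single failed attempt above before the retry succeeds. A condensed mount/verify/unmount sketch; the host directory is illustrative:

  out/minikube-linux-arm64 mount -p functional-022013 /tmp/data:/mount-9p &
  out/minikube-linux-arm64 -p functional-022013 ssh "findmnt -T /mount-9p | grep 9p"
  out/minikube-linux-arm64 -p functional-022013 ssh "sudo umount -f /mount-9p"
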
TestFunctional/parallel/ServiceCmd/List (0.59s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-arm64 -p functional-022013 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.59s)
TestFunctional/parallel/ServiceCmd/JSONOutput (0.54s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-arm64 -p functional-022013 service list -o json
functional_test.go:1490: Took "539.392636ms" to run "out/minikube-linux-arm64 -p functional-022013 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.54s)
TestFunctional/parallel/ServiceCmd/HTTPS (0.38s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-arm64 -p functional-022013 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.49.2:31757
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.38s)
TestFunctional/parallel/ServiceCmd/Format (0.38s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-arm64 -p functional-022013 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.38s)
TestFunctional/parallel/ServiceCmd/URL (0.46s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-arm64 -p functional-022013 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.49.2:31757
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.46s)
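
The service subcommand can emit the endpoint in several shapes, all resolving to the same NodePort (31757 here): a plain URL, an HTTPS URL, or a Go-template projection. The three variants exercised above:

  out/minikube-linux-arm64 -p functional-022013 service hello-node --url
  out/minikube-linux-arm64 -p functional-022013 service --namespace=default --https --url hello-node
  out/minikube-linux-arm64 -p functional-022013 service hello-node --url --format={{.IP}}
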
TestFunctional/parallel/MountCmd/specific-port (2.46s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-022013 /tmp/TestFunctionalparallelMountCmdspecific-port1152073911/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-022013 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-022013 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (648.937774ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-022013 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-022013 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-022013 /tmp/TestFunctionalparallelMountCmdspecific-port1152073911/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
E0807 18:44:25.743500  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/addons-553671/client.crt: no such file or directory
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-022013 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-022013 ssh "sudo umount -f /mount-9p": exit status 1 (290.470438ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-022013 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-022013 /tmp/TestFunctionalparallelMountCmdspecific-port1152073911/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.46s)
TestFunctional/parallel/MountCmd/VerifyCleanup (2.45s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-022013 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3878955492/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-022013 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3878955492/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-022013 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3878955492/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-022013 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-022013 ssh "findmnt -T" /mount1: exit status 1 (739.154553ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-022013 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-022013 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-022013 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-022013 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-022013 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3878955492/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-022013 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3878955492/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-022013 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3878955492/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.45s)
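
VerifyCleanup confirms that one kill switch tears down every mount daemon for the profile at once, which is why each subsequent stop attempt finds no surviving parent process. The cleanup command, verbatim from the log:

  out/minikube-linux-arm64 mount -p functional-022013 --kill=true
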
TestFunctional/parallel/Version/short (0.09s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-arm64 -p functional-022013 version --short
--- PASS: TestFunctional/parallel/Version/short (0.09s)
TestFunctional/parallel/Version/components (1.26s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-arm64 -p functional-022013 version -o=json --components
functional_test.go:2266: (dbg) Done: out/minikube-linux-arm64 -p functional-022013 version -o=json --components: (1.261153048s)
--- PASS: TestFunctional/parallel/Version/components (1.26s)
TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-022013 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-022013 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.3
registry.k8s.io/kube-proxy:v1.30.3
registry.k8s.io/kube-controller-manager:v1.30.3
registry.k8s.io/kube-apiserver:v1.30.3
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-022013
docker.io/kindest/kindnetd:v20240730-75a5af0c
docker.io/kindest/kindnetd:v20240715-585640e9
docker.io/kicbase/echo-server:functional-022013
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-022013 image ls --format short --alsologtostderr:
I0807 18:44:38.647960  485199 out.go:291] Setting OutFile to fd 1 ...
I0807 18:44:38.648178  485199 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0807 18:44:38.648189  485199 out.go:304] Setting ErrFile to fd 2...
I0807 18:44:38.648194  485199 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0807 18:44:38.648449  485199 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19389-443116/.minikube/bin
I0807 18:44:38.649076  485199 config.go:182] Loaded profile config "functional-022013": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.3
I0807 18:44:38.649228  485199 config.go:182] Loaded profile config "functional-022013": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.3
I0807 18:44:38.649712  485199 cli_runner.go:164] Run: docker container inspect functional-022013 --format={{.State.Status}}
I0807 18:44:38.669032  485199 ssh_runner.go:195] Run: systemctl --version
I0807 18:44:38.669088  485199 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-022013
I0807 18:44:38.687923  485199 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/19389-443116/.minikube/machines/functional-022013/id_rsa Username:docker}
I0807 18:44:38.784593  485199 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)
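
The stderr trace shows how image ls is implemented for this runtime: minikube inspects the node container, opens an SSH session into it, and shells out to crictl, then formats the result. The underlying call can be run by hand:

  out/minikube-linux-arm64 -p functional-022013 ssh "sudo crictl images --output json"
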
TestFunctional/parallel/ImageCommands/ImageListTable (0.3s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-022013 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-022013 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| docker.io/kindest/kindnetd                  | v20240715-585640e9 | sha256:5e3296 | 33.3MB |
| docker.io/library/minikube-local-cache-test | functional-022013  | sha256:ef49b2 | 992B   |
| registry.k8s.io/pause                       | 3.3                | sha256:3d1873 | 249kB  |
| registry.k8s.io/pause                       | 3.9                | sha256:829e9d | 268kB  |
| docker.io/library/nginx                     | alpine             | sha256:d7cd33 | 18.3MB |
| docker.io/library/nginx                     | latest             | sha256:43b17f | 67.6MB |
| registry.k8s.io/pause                       | 3.1                | sha256:8057e0 | 262kB  |
| registry.k8s.io/kube-controller-manager     | v1.30.3            | sha256:8e97cd | 28.4MB |
| registry.k8s.io/kube-scheduler              | v1.30.3            | sha256:d48f99 | 17.6MB |
| registry.k8s.io/pause                       | latest             | sha256:8cb209 | 71.3kB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:1611cd | 1.94MB |
| registry.k8s.io/coredns/coredns             | v1.11.1            | sha256:2437cf | 16.5MB |
| registry.k8s.io/echoserver-arm              | 1.8                | sha256:72565b | 45.3MB |
| registry.k8s.io/kube-apiserver              | v1.30.3            | sha256:617731 | 29.9MB |
| docker.io/kicbase/echo-server               | functional-022013  | sha256:ce2d2c | 2.17MB |
| docker.io/kindest/kindnetd                  | v20240730-75a5af0c | sha256:d5e283 | 33.3MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:ba04bb | 8.03MB |
| registry.k8s.io/etcd                        | 3.5.12-0           | sha256:014faa | 66.2MB |
| registry.k8s.io/kube-proxy                  | v1.30.3            | sha256:2351f5 | 25.6MB |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-022013 image ls --format table --alsologtostderr:
I0807 18:44:39.224021  485354 out.go:291] Setting OutFile to fd 1 ...
I0807 18:44:39.224205  485354 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0807 18:44:39.224217  485354 out.go:304] Setting ErrFile to fd 2...
I0807 18:44:39.224223  485354 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0807 18:44:39.224468  485354 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19389-443116/.minikube/bin
I0807 18:44:39.225077  485354 config.go:182] Loaded profile config "functional-022013": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.3
I0807 18:44:39.225214  485354 config.go:182] Loaded profile config "functional-022013": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.3
I0807 18:44:39.228095  485354 cli_runner.go:164] Run: docker container inspect functional-022013 --format={{.State.Status}}
I0807 18:44:39.250067  485354 ssh_runner.go:195] Run: systemctl --version
I0807 18:44:39.250129  485354 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-022013
I0807 18:44:39.273075  485354 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/19389-443116/.minikube/machines/functional-022013/id_rsa Username:docker}
I0807 18:44:39.378303  485354 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.30s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-022013 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-022013 image ls --format json --alsologtostderr:
[{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"16482581"},{"id":"sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b03426
48818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"45324675"},{"id":"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd","repoDigests":["registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"66189079"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"74084559"},{"id":"sha256:d5e283bc63d431d0446af8b48a1618696def3b777347a97b8b3553d2c989c806","repoDigests":["docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3"],"repoTags":["docker.io/kindest/kindnetd:v20240730-75a5af0c"],"size":"33305789"},{"id":"sha25
6:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"},{"id":"sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"18306114"},{"id":"sha256:43b17fe33c4b4cf8de762123d33e02f2ed0c5e1178002f533d4fb5df1e05fb76","repoDigests":["docker.io/library/nginx@sha256:6af79ae5de407283dcea8b00d5c37ace95441fd58a8b1d2aa1ed93f5511bb18c"],"repoTags":["docker.io/library/nginx:latest"],"size":"67647629"},{"id":"sha256:8e97cdb19e7cc420af7c71de8b5c9ab536bd278758c8c0878c464b833d91b31a","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.3"],"size":"28374500"},{"id":"sha256:5e32961ddcea3ade65511b2e27f675bbda25305639279f8b708014019e8cebb2","repoD
igests":["docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"],"repoTags":["docker.io/kindest/kindnetd:v20240715-585640e9"],"size":"33290438"},{"id":"sha256:d7cd33d7d4ed1cdef69594adc36fcc03a0aa45ba930d39a9286024d9b2322660","repoDigests":["docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9"],"repoTags":["docker.io/library/nginx:alpine"],"size":"18253575"},{"id":"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"268051"},{"id":"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-022013"],"size":"2173567"},{"id":"sha256:2351f570ed0eac5533e538280d73c6aa5d6b6f6379f5f3fac08f51378621e6be","repoDigests":["registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7
dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"],"repoTags":["registry.k8s.io/kube-proxy:v1.30.3"],"size":"25645955"},{"id":"sha256:d48f992a22722fc0290769b8fab1186db239bbad4cff837fbb641c55faef9355","repoDigests":["registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.3"],"size":"17641143"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:ef49b25866374be47ffbff449b45434d06a2916724f43214c3abb3722a929b52","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-022013"],"size":"992"},{"id":"sha256:61773190d42ff0792f3bab2658e80b1c07519170955bb350b153b564ef28f4ca","repoDigests":["registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c"],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.3"],"size":"29942692"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-022013 image ls --format json --alsologtostderr:
I0807 18:44:38.927706  485268 out.go:291] Setting OutFile to fd 1 ...
I0807 18:44:38.927851  485268 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0807 18:44:38.927861  485268 out.go:304] Setting ErrFile to fd 2...
I0807 18:44:38.927865  485268 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0807 18:44:38.928087  485268 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19389-443116/.minikube/bin
I0807 18:44:38.928748  485268 config.go:182] Loaded profile config "functional-022013": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.3
I0807 18:44:38.928876  485268 config.go:182] Loaded profile config "functional-022013": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.3
I0807 18:44:38.929379  485268 cli_runner.go:164] Run: docker container inspect functional-022013 --format={{.State.Status}}
I0807 18:44:38.951603  485268 ssh_runner.go:195] Run: systemctl --version
I0807 18:44:38.951761  485268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-022013
I0807 18:44:38.991060  485268 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/19389-443116/.minikube/machines/functional-022013/id_rsa Username:docker}
I0807 18:44:39.105468  485268 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.30s)
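
The JSON format emits an array of objects with id, repoDigests, repoTags, and size fields, which makes it the easiest mode to script against. For example, to list every tag (assuming jq is installed):

  out/minikube-linux-arm64 -p functional-022013 image ls --format json | jq -r '.[].repoTags[]'
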
TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-022013 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-022013 image ls --format yaml --alsologtostderr:
- id: sha256:61773190d42ff0792f3bab2658e80b1c07519170955bb350b153b564ef28f4ca
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.3
size: "29942692"
- id: sha256:8e97cdb19e7cc420af7c71de8b5c9ab536bd278758c8c0878c464b833d91b31a
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.3
size: "28374500"
- id: sha256:2351f570ed0eac5533e538280d73c6aa5d6b6f6379f5f3fac08f51378621e6be
repoDigests:
- registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65
repoTags:
- registry.k8s.io/kube-proxy:v1.30.3
size: "25645955"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "268051"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "18306114"
- id: sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "45324675"
- id: sha256:d48f992a22722fc0290769b8fab1186db239bbad4cff837fbb641c55faef9355
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.3
size: "17641143"
- id: sha256:d5e283bc63d431d0446af8b48a1618696def3b777347a97b8b3553d2c989c806
repoDigests:
- docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3
repoTags:
- docker.io/kindest/kindnetd:v20240730-75a5af0c
size: "33305789"
- id: sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "74084559"
- id: sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-022013
size: "2173567"
- id: sha256:ef49b25866374be47ffbff449b45434d06a2916724f43214c3abb3722a929b52
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-022013
size: "992"
- id: sha256:43b17fe33c4b4cf8de762123d33e02f2ed0c5e1178002f533d4fb5df1e05fb76
repoDigests:
- docker.io/library/nginx@sha256:6af79ae5de407283dcea8b00d5c37ace95441fd58a8b1d2aa1ed93f5511bb18c
repoTags:
- docker.io/library/nginx:latest
size: "67647629"
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "16482581"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:5e32961ddcea3ade65511b2e27f675bbda25305639279f8b708014019e8cebb2
repoDigests:
- docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493
repoTags:
- docker.io/kindest/kindnetd:v20240715-585640e9
size: "33290438"
- id: sha256:d7cd33d7d4ed1cdef69594adc36fcc03a0aa45ba930d39a9286024d9b2322660
repoDigests:
- docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9
repoTags:
- docker.io/library/nginx:alpine
size: "18253575"
- id: sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd
repoDigests:
- registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "66189079"

functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-022013 image ls --format yaml --alsologtostderr:
I0807 18:44:38.653059  485200 out.go:291] Setting OutFile to fd 1 ...
I0807 18:44:38.653299  485200 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0807 18:44:38.653327  485200 out.go:304] Setting ErrFile to fd 2...
I0807 18:44:38.653348  485200 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0807 18:44:38.653666  485200 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19389-443116/.minikube/bin
I0807 18:44:38.654426  485200 config.go:182] Loaded profile config "functional-022013": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.3
I0807 18:44:38.654642  485200 config.go:182] Loaded profile config "functional-022013": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.3
I0807 18:44:38.655265  485200 cli_runner.go:164] Run: docker container inspect functional-022013 --format={{.State.Status}}
I0807 18:44:38.675301  485200 ssh_runner.go:195] Run: systemctl --version
I0807 18:44:38.675361  485200 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-022013
I0807 18:44:38.698334  485200 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/19389-443116/.minikube/machines/functional-022013/id_rsa Username:docker}
I0807 18:44:38.797354  485200 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-arm64 -p functional-022013 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-022013 ssh pgrep buildkitd: exit status 1 (340.422744ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p functional-022013 image build -t localhost/my-image:functional-022013 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p functional-022013 image build -t localhost/my-image:functional-022013 testdata/build --alsologtostderr: (2.626538861s)
functional_test.go:322: (dbg) Stderr: out/minikube-linux-arm64 -p functional-022013 image build -t localhost/my-image:functional-022013 testdata/build --alsologtostderr:
I0807 18:44:39.250809  485360 out.go:291] Setting OutFile to fd 1 ...
I0807 18:44:39.251583  485360 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0807 18:44:39.251601  485360 out.go:304] Setting ErrFile to fd 2...
I0807 18:44:39.251607  485360 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0807 18:44:39.251946  485360 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19389-443116/.minikube/bin
I0807 18:44:39.252743  485360 config.go:182] Loaded profile config "functional-022013": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.3
I0807 18:44:39.255070  485360 config.go:182] Loaded profile config "functional-022013": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.3
I0807 18:44:39.255586  485360 cli_runner.go:164] Run: docker container inspect functional-022013 --format={{.State.Status}}
I0807 18:44:39.282527  485360 ssh_runner.go:195] Run: systemctl --version
I0807 18:44:39.282581  485360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-022013
I0807 18:44:39.303028  485360 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/19389-443116/.minikube/machines/functional-022013/id_rsa Username:docker}
I0807 18:44:39.407239  485360 build_images.go:161] Building image from path: /tmp/build.1749517395.tar
I0807 18:44:39.407318  485360 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0807 18:44:39.425475  485360 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1749517395.tar
I0807 18:44:39.429472  485360 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1749517395.tar: stat -c "%s %y" /var/lib/minikube/build/build.1749517395.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1749517395.tar': No such file or directory
I0807 18:44:39.429505  485360 ssh_runner.go:362] scp /tmp/build.1749517395.tar --> /var/lib/minikube/build/build.1749517395.tar (3072 bytes)
I0807 18:44:39.469747  485360 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1749517395
I0807 18:44:39.479346  485360 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1749517395 -xf /var/lib/minikube/build/build.1749517395.tar
I0807 18:44:39.489789  485360 containerd.go:394] Building image: /var/lib/minikube/build/build.1749517395
I0807 18:44:39.489875  485360 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1749517395 --local dockerfile=/var/lib/minikube/build/build.1749517395 --output type=image,name=localhost/my-image:functional-022013
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.8s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.2s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.3s

#6 [2/3] RUN true
#6 DONE 0.8s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:66964c5ef78738149efecdaa61dda738ef74b579927e8ce0ae8c4d84c6318692
#8 exporting manifest sha256:66964c5ef78738149efecdaa61dda738ef74b579927e8ce0ae8c4d84c6318692 0.0s done
#8 exporting config sha256:e2ed8778d92d20f2fd296fb78f5e24df71105526ed8e416efc15a6741bc00321 0.0s done
#8 naming to localhost/my-image:functional-022013 done
#8 DONE 0.1s
I0807 18:44:41.789763  485360 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1749517395 --local dockerfile=/var/lib/minikube/build/build.1749517395 --output type=image,name=localhost/my-image:functional-022013: (2.29985794s)
I0807 18:44:41.789840  485360 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1749517395
I0807 18:44:41.799400  485360 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1749517395.tar
I0807 18:44:41.809542  485360 build_images.go:217] Built localhost/my-image:functional-022013 from /tmp/build.1749517395.tar
I0807 18:44:41.809577  485360 build_images.go:133] succeeded building to: functional-022013
I0807 18:44:41.809583  485360 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-022013 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.22s)
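The build log above spells out the containerd path for `minikube image build`: the context is tarred locally, copied to /var/lib/minikube/build on the node, unpacked, and handed to BuildKit via buildctl. A rough Go sketch of that sequence, using the exact buildctl flags from the log; in the real flow every step runs on the node over SSH (ssh_runner), not locally:

// Illustrative only: replays the command sequence recorded in the stderr
// above as local commands. Paths mirror the log; this is not minikube's API.
package main

import (
	"log"
	"os/exec"
)

func run(name string, args ...string) {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		log.Fatalf("%s %v: %v\n%s", name, args, err, out)
	}
}

func main() {
	const dir = "/var/lib/minikube/build/build.1749517395"
	run("sudo", "mkdir", "-p", dir)                  // stage the build directory
	run("sudo", "tar", "-C", dir, "-xf", dir+".tar") // unpack the uploaded context
	run("sudo", "buildctl", "build",                 // drive BuildKit directly
		"--frontend", "dockerfile.v0",
		"--local", "context="+dir,
		"--local", "dockerfile="+dir,
		"--output", "type=image,name=localhost/my-image:functional-022013")
	run("sudo", "rm", "-rf", dir) // clean up, as the log does
	run("sudo", "rm", "-f", dir+".tar")
}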

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.83s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull docker.io/kicbase/echo-server:1.0
functional_test.go:346: (dbg) Run:  docker tag docker.io/kicbase/echo-server:1.0 docker.io/kicbase/echo-server:functional-022013
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.83s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.8s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-arm64 -p functional-022013 image load --daemon docker.io/kicbase/echo-server:functional-022013 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-arm64 -p functional-022013 image load --daemon docker.io/kicbase/echo-server:functional-022013 --alsologtostderr: (1.528366103s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-022013 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.80s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-arm64 -p functional-022013 image load --daemon docker.io/kicbase/echo-server:functional-022013 --alsologtostderr
2024/08/07 18:44:33 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-022013 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.28s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.19s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-022013 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.19s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.2s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-022013 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.20s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.21s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-022013 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.21s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.82s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull docker.io/kicbase/echo-server:latest
functional_test.go:239: (dbg) Run:  docker tag docker.io/kicbase/echo-server:latest docker.io/kicbase/echo-server:functional-022013
functional_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p functional-022013 image load --daemon docker.io/kicbase/echo-server:functional-022013 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-arm64 -p functional-022013 image load --daemon docker.io/kicbase/echo-server:functional-022013 --alsologtostderr: (1.246886478s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-022013 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.82s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.83s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-arm64 -p functional-022013 image save docker.io/kicbase/echo-server:functional-022013 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.83s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.55s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-arm64 -p functional-022013 image rm docker.io/kicbase/echo-server:functional-022013 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-022013 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.55s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.87s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-arm64 -p functional-022013 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-022013 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.87s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.78s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi docker.io/kicbase/echo-server:functional-022013
functional_test.go:423: (dbg) Run:  out/minikube-linux-arm64 -p functional-022013 image save --daemon docker.io/kicbase/echo-server:functional-022013 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect docker.io/kicbase/echo-server:functional-022013
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.78s)
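ImageSaveToFile, ImageRemove, ImageLoadFromFile and ImageSaveDaemon together exercise a save/remove/load round trip. A condensed sketch of that cycle, shelling out to the same subcommands the logs show (error handling trimmed; the tar path is the workspace path used above):

// Sketch of the image round trip: cluster -> tar -> cluster.
package main

import "os/exec"

func main() {
	const profile = "functional-022013"
	const tag = "docker.io/kicbase/echo-server:" + profile
	const tarball = "/home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar"

	mk := func(args ...string) error {
		return exec.Command("out/minikube-linux-arm64", append([]string{"-p", profile}, args...)...).Run()
	}
	_ = mk("image", "save", tag, tarball) // cluster -> tar file
	_ = mk("image", "rm", tag)            // drop it from the runtime
	_ = mk("image", "load", tarball)      // tar file -> cluster again
	_ = mk("image", "ls")                 // verify it is listed again
}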

                                                
                                    
TestFunctional/delete_echo-server_images (0.06s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:1.0
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:functional-022013
--- PASS: TestFunctional/delete_echo-server_images (0.06s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-022013
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-022013
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (124.37s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-917095 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E0807 18:44:46.224170  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/addons-553671/client.crt: no such file or directory
E0807 18:45:27.184729  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/addons-553671/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-917095 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (2m3.446141914s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-917095 status -v=7 --alsologtostderr
E0807 18:46:49.104986  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/addons-553671/client.crt: no such file or directory
--- PASS: TestMultiControlPlane/serial/StartCluster (124.37s)
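StartCluster boils down to one long-running CLI call with the flags shown, which the harness times (2m3.4s here). A minimal sketch of the same invocation and timing, assuming the tree-local binary layout above:

// Start a 3-control-plane (--ha) cluster and report how long it took.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Flags copied verbatim from the test invocation above.
	args := []string{"start", "-p", "ha-917095", "--wait=true", "--memory=2200",
		"--ha", "-v=7", "--alsologtostderr", "--driver=docker", "--container-runtime=containerd"}
	begin := time.Now()
	if err := exec.Command("out/minikube-linux-arm64", args...).Run(); err != nil {
		panic(err)
	}
	fmt.Println("HA cluster up in", time.Since(begin))
}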

                                                
                                    
TestMultiControlPlane/serial/DeployApp (5.99s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-917095 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-917095 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-917095 -- rollout status deployment/busybox: (3.21860286s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-917095 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-917095 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-917095 -- exec busybox-fc5497c4f-p28v2 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-917095 -- exec busybox-fc5497c4f-swpdq -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-917095 -- exec busybox-fc5497c4f-tb68w -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-917095 -- exec busybox-fc5497c4f-p28v2 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-917095 -- exec busybox-fc5497c4f-swpdq -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-917095 -- exec busybox-fc5497c4f-tb68w -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-917095 -- exec busybox-fc5497c4f-p28v2 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-917095 -- exec busybox-fc5497c4f-swpdq -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-917095 -- exec busybox-fc5497c4f-tb68w -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.99s)
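DeployApp verifies cluster DNS by resolving three well-known names from every busybox replica. A compact sketch of that loop; the pod names are copied from the log and would normally be discovered with the jsonpath query shown above:

// Resolve a few well-known names from inside each busybox pod.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	pods := []string{"busybox-fc5497c4f-p28v2", "busybox-fc5497c4f-swpdq", "busybox-fc5497c4f-tb68w"}
	names := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
	for _, pod := range pods {
		for _, name := range names {
			err := exec.Command("kubectl", "--context", "ha-917095",
				"exec", pod, "--", "nslookup", name).Run()
			fmt.Printf("%s -> %s: err=%v\n", pod, name, err) // err == nil means it resolved
		}
	}
}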

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.69s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-917095 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-917095 -- exec busybox-fc5497c4f-p28v2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-917095 -- exec busybox-fc5497c4f-p28v2 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-917095 -- exec busybox-fc5497c4f-swpdq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-917095 -- exec busybox-fc5497c4f-swpdq -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-917095 -- exec busybox-fc5497c4f-tb68w -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-917095 -- exec busybox-fc5497c4f-tb68w -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.69s)
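The awk/cut pipeline above pulls the resolved address of host.minikube.internal out of busybox's nslookup output (line 5, third space-separated field), which the test then pings from the pod. The same extraction in Go, as a sketch; note strings.Fields collapses repeated spaces, a slight difference from cut:

// Extract the host IP the pods resolve for host.minikube.internal.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, _ := exec.Command("kubectl", "--context", "ha-917095", "exec",
		"busybox-fc5497c4f-p28v2", "--", "nslookup", "host.minikube.internal").Output()
	lines := strings.Split(string(out), "\n")
	if len(lines) >= 5 {
		fields := strings.Fields(lines[4]) // NR==5 in awk terms
		if len(fields) >= 3 {
			fmt.Println("host IP:", fields[2]) // 192.168.49.1 in the run above
		}
	}
}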

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (24.93s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-917095 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-917095 -v=7 --alsologtostderr: (23.867406904s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-917095 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-917095 status -v=7 --alsologtostderr: (1.061896428s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (24.93s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.13s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-917095 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.13s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.79s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.79s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (19.93s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-arm64 -p ha-917095 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-917095 cp testdata/cp-test.txt ha-917095:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-917095 ssh -n ha-917095 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-917095 cp ha-917095:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1035868586/001/cp-test_ha-917095.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-917095 ssh -n ha-917095 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-917095 cp ha-917095:/home/docker/cp-test.txt ha-917095-m02:/home/docker/cp-test_ha-917095_ha-917095-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-917095 ssh -n ha-917095 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-917095 ssh -n ha-917095-m02 "sudo cat /home/docker/cp-test_ha-917095_ha-917095-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-917095 cp ha-917095:/home/docker/cp-test.txt ha-917095-m03:/home/docker/cp-test_ha-917095_ha-917095-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-917095 ssh -n ha-917095 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-917095 ssh -n ha-917095-m03 "sudo cat /home/docker/cp-test_ha-917095_ha-917095-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-917095 cp ha-917095:/home/docker/cp-test.txt ha-917095-m04:/home/docker/cp-test_ha-917095_ha-917095-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-917095 ssh -n ha-917095 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-917095 ssh -n ha-917095-m04 "sudo cat /home/docker/cp-test_ha-917095_ha-917095-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-917095 cp testdata/cp-test.txt ha-917095-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-917095 ssh -n ha-917095-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-917095 cp ha-917095-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1035868586/001/cp-test_ha-917095-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-917095 ssh -n ha-917095-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-917095 cp ha-917095-m02:/home/docker/cp-test.txt ha-917095:/home/docker/cp-test_ha-917095-m02_ha-917095.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-917095 ssh -n ha-917095-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-917095 ssh -n ha-917095 "sudo cat /home/docker/cp-test_ha-917095-m02_ha-917095.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-917095 cp ha-917095-m02:/home/docker/cp-test.txt ha-917095-m03:/home/docker/cp-test_ha-917095-m02_ha-917095-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-917095 ssh -n ha-917095-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-917095 ssh -n ha-917095-m03 "sudo cat /home/docker/cp-test_ha-917095-m02_ha-917095-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-917095 cp ha-917095-m02:/home/docker/cp-test.txt ha-917095-m04:/home/docker/cp-test_ha-917095-m02_ha-917095-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-917095 ssh -n ha-917095-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-917095 ssh -n ha-917095-m04 "sudo cat /home/docker/cp-test_ha-917095-m02_ha-917095-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-917095 cp testdata/cp-test.txt ha-917095-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-917095 ssh -n ha-917095-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-917095 cp ha-917095-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1035868586/001/cp-test_ha-917095-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-917095 ssh -n ha-917095-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-917095 cp ha-917095-m03:/home/docker/cp-test.txt ha-917095:/home/docker/cp-test_ha-917095-m03_ha-917095.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-917095 ssh -n ha-917095-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-917095 ssh -n ha-917095 "sudo cat /home/docker/cp-test_ha-917095-m03_ha-917095.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-917095 cp ha-917095-m03:/home/docker/cp-test.txt ha-917095-m02:/home/docker/cp-test_ha-917095-m03_ha-917095-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-917095 ssh -n ha-917095-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-917095 ssh -n ha-917095-m02 "sudo cat /home/docker/cp-test_ha-917095-m03_ha-917095-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-917095 cp ha-917095-m03:/home/docker/cp-test.txt ha-917095-m04:/home/docker/cp-test_ha-917095-m03_ha-917095-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-917095 ssh -n ha-917095-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-917095 ssh -n ha-917095-m04 "sudo cat /home/docker/cp-test_ha-917095-m03_ha-917095-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-917095 cp testdata/cp-test.txt ha-917095-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-917095 ssh -n ha-917095-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-917095 cp ha-917095-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1035868586/001/cp-test_ha-917095-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-917095 ssh -n ha-917095-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-917095 cp ha-917095-m04:/home/docker/cp-test.txt ha-917095:/home/docker/cp-test_ha-917095-m04_ha-917095.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-917095 ssh -n ha-917095-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-917095 ssh -n ha-917095 "sudo cat /home/docker/cp-test_ha-917095-m04_ha-917095.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-917095 cp ha-917095-m04:/home/docker/cp-test.txt ha-917095-m02:/home/docker/cp-test_ha-917095-m04_ha-917095-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-917095 ssh -n ha-917095-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-917095 ssh -n ha-917095-m02 "sudo cat /home/docker/cp-test_ha-917095-m04_ha-917095-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-917095 cp ha-917095-m04:/home/docker/cp-test.txt ha-917095-m03:/home/docker/cp-test_ha-917095-m04_ha-917095-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-917095 ssh -n ha-917095-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-917095 ssh -n ha-917095-m03 "sudo cat /home/docker/cp-test_ha-917095-m04_ha-917095-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.93s)
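Every step in CopyFile is the same two-command pattern: `minikube cp` into a node, then `minikube ssh` to cat the file back for comparison. A condensed sketch of that helper logic for a single source/target pair:

// Copy a file to one node and read it back; the real test asserts the
// returned bytes match the local testdata file.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	run := func(args ...string) ([]byte, error) {
		return exec.Command("out/minikube-linux-arm64", append([]string{"-p", "ha-917095"}, args...)...).Output()
	}
	if _, err := run("cp", "testdata/cp-test.txt", "ha-917095-m02:/home/docker/cp-test.txt"); err != nil {
		panic(err)
	}
	got, err := run("ssh", "-n", "ha-917095-m02", "sudo cat /home/docker/cp-test.txt")
	if err != nil {
		panic(err)
	}
	fmt.Printf("copied contents: %q\n", got)
}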

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (12.93s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-arm64 -p ha-917095 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-arm64 -p ha-917095 node stop m02 -v=7 --alsologtostderr: (12.171760928s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-arm64 -p ha-917095 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-917095 status -v=7 --alsologtostderr: exit status 7 (760.900227ms)
-- stdout --
	ha-917095
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-917095-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-917095-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-917095-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I0807 18:47:55.041602  501805 out.go:291] Setting OutFile to fd 1 ...
	I0807 18:47:55.041918  501805 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 18:47:55.041954  501805 out.go:304] Setting ErrFile to fd 2...
	I0807 18:47:55.041975  501805 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 18:47:55.042274  501805 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19389-443116/.minikube/bin
	I0807 18:47:55.042550  501805 out.go:298] Setting JSON to false
	I0807 18:47:55.042682  501805 mustload.go:65] Loading cluster: ha-917095
	I0807 18:47:55.042770  501805 notify.go:220] Checking for updates...
	I0807 18:47:55.043330  501805 config.go:182] Loaded profile config "ha-917095": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.3
	I0807 18:47:55.043417  501805 status.go:255] checking status of ha-917095 ...
	I0807 18:47:55.044077  501805 cli_runner.go:164] Run: docker container inspect ha-917095 --format={{.State.Status}}
	I0807 18:47:55.067491  501805 status.go:330] ha-917095 host status = "Running" (err=<nil>)
	I0807 18:47:55.067585  501805 host.go:66] Checking if "ha-917095" exists ...
	I0807 18:47:55.067967  501805 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-917095
	I0807 18:47:55.087904  501805 host.go:66] Checking if "ha-917095" exists ...
	I0807 18:47:55.088426  501805 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0807 18:47:55.088548  501805 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-917095
	I0807 18:47:55.118325  501805 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/19389-443116/.minikube/machines/ha-917095/id_rsa Username:docker}
	I0807 18:47:55.218668  501805 ssh_runner.go:195] Run: systemctl --version
	I0807 18:47:55.223323  501805 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0807 18:47:55.235182  501805 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0807 18:47:55.291669  501805 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:55 OomKillDisable:true NGoroutines:71 SystemTime:2024-08-07 18:47:55.281879109 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0807 18:47:55.292311  501805 kubeconfig.go:125] found "ha-917095" server: "https://192.168.49.254:8443"
	I0807 18:47:55.292340  501805 api_server.go:166] Checking apiserver status ...
	I0807 18:47:55.292536  501805 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0807 18:47:55.304884  501805 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1574/cgroup
	I0807 18:47:55.314796  501805 api_server.go:182] apiserver freezer: "8:freezer:/docker/17b288fb413480aff6c563bcd670d5d4d61551601658753165b10e6f444d38bd/kubepods/burstable/pod6009f6a140faaed90a99de886b3e477a/b72d158a2c94338644e7b2f5614ce3fa97bdfc8cf6d28a03a3052367f12cd31c"
	I0807 18:47:55.314924  501805 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/17b288fb413480aff6c563bcd670d5d4d61551601658753165b10e6f444d38bd/kubepods/burstable/pod6009f6a140faaed90a99de886b3e477a/b72d158a2c94338644e7b2f5614ce3fa97bdfc8cf6d28a03a3052367f12cd31c/freezer.state
	I0807 18:47:55.324289  501805 api_server.go:204] freezer state: "THAWED"
	I0807 18:47:55.324319  501805 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0807 18:47:55.332596  501805 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0807 18:47:55.332667  501805 status.go:422] ha-917095 apiserver status = Running (err=<nil>)
	I0807 18:47:55.332695  501805 status.go:257] ha-917095 status: &{Name:ha-917095 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0807 18:47:55.332726  501805 status.go:255] checking status of ha-917095-m02 ...
	I0807 18:47:55.333079  501805 cli_runner.go:164] Run: docker container inspect ha-917095-m02 --format={{.State.Status}}
	I0807 18:47:55.351876  501805 status.go:330] ha-917095-m02 host status = "Stopped" (err=<nil>)
	I0807 18:47:55.351895  501805 status.go:343] host is not running, skipping remaining checks
	I0807 18:47:55.351903  501805 status.go:257] ha-917095-m02 status: &{Name:ha-917095-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0807 18:47:55.351924  501805 status.go:255] checking status of ha-917095-m03 ...
	I0807 18:47:55.352330  501805 cli_runner.go:164] Run: docker container inspect ha-917095-m03 --format={{.State.Status}}
	I0807 18:47:55.369603  501805 status.go:330] ha-917095-m03 host status = "Running" (err=<nil>)
	I0807 18:47:55.369632  501805 host.go:66] Checking if "ha-917095-m03" exists ...
	I0807 18:47:55.369938  501805 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-917095-m03
	I0807 18:47:55.385905  501805 host.go:66] Checking if "ha-917095-m03" exists ...
	I0807 18:47:55.386197  501805 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0807 18:47:55.386242  501805 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-917095-m03
	I0807 18:47:55.403951  501805 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33193 SSHKeyPath:/home/jenkins/minikube-integration/19389-443116/.minikube/machines/ha-917095-m03/id_rsa Username:docker}
	I0807 18:47:55.501885  501805 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0807 18:47:55.521270  501805 kubeconfig.go:125] found "ha-917095" server: "https://192.168.49.254:8443"
	I0807 18:47:55.521298  501805 api_server.go:166] Checking apiserver status ...
	I0807 18:47:55.521370  501805 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0807 18:47:55.532668  501805 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1447/cgroup
	I0807 18:47:55.544689  501805 api_server.go:182] apiserver freezer: "8:freezer:/docker/adecc2e13d86f3b8a88e8b5bec8ea704c88b72b05ea9f90b4c1b01db092836e8/kubepods/burstable/pod481894c7840aa0fe7cfd2d7965d5823d/f40dc9802e53e7e3652d894a66629ef44dc02720594cac4c6d994ca7763d993c"
	I0807 18:47:55.544770  501805 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/adecc2e13d86f3b8a88e8b5bec8ea704c88b72b05ea9f90b4c1b01db092836e8/kubepods/burstable/pod481894c7840aa0fe7cfd2d7965d5823d/f40dc9802e53e7e3652d894a66629ef44dc02720594cac4c6d994ca7763d993c/freezer.state
	I0807 18:47:55.553531  501805 api_server.go:204] freezer state: "THAWED"
	I0807 18:47:55.553563  501805 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0807 18:47:55.561361  501805 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0807 18:47:55.561388  501805 status.go:422] ha-917095-m03 apiserver status = Running (err=<nil>)
	I0807 18:47:55.561399  501805 status.go:257] ha-917095-m03 status: &{Name:ha-917095-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0807 18:47:55.561416  501805 status.go:255] checking status of ha-917095-m04 ...
	I0807 18:47:55.561721  501805 cli_runner.go:164] Run: docker container inspect ha-917095-m04 --format={{.State.Status}}
	I0807 18:47:55.586664  501805 status.go:330] ha-917095-m04 host status = "Running" (err=<nil>)
	I0807 18:47:55.586691  501805 host.go:66] Checking if "ha-917095-m04" exists ...
	I0807 18:47:55.587009  501805 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-917095-m04
	I0807 18:47:55.605016  501805 host.go:66] Checking if "ha-917095-m04" exists ...
	I0807 18:47:55.605322  501805 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0807 18:47:55.605377  501805 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-917095-m04
	I0807 18:47:55.622874  501805 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33198 SSHKeyPath:/home/jenkins/minikube-integration/19389-443116/.minikube/machines/ha-917095-m04/id_rsa Username:docker}
	I0807 18:47:55.721705  501805 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0807 18:47:55.733570  501805 status.go:257] ha-917095-m04 status: &{Name:ha-917095-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.93s)
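The non-zero exit here is intentional: per `minikube status --help`, the exit status encodes host, kubelet and apiserver state in its low bits, so 7 (1+2+4) is expected while m02 is fully stopped. A sketch of asserting on that code instead of treating it as a failure:

// Read the bit-encoded exit status of `minikube status`.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "-p", "ha-917095", "status", "-v=7", "--alsologtostderr")
	err := cmd.Run()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// 7 = 1 (host) + 2 (kubelet) + 4 (apiserver) not OK, as seen above.
		fmt.Println("status exit code:", exitErr.ExitCode())
	} else if err == nil {
		fmt.Println("all nodes running")
	}
}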

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.57s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.57s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (18.81s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-arm64 -p ha-917095 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Done: out/minikube-linux-arm64 -p ha-917095 node start m02 -v=7 --alsologtostderr: (17.604882591s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p ha-917095 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-linux-arm64 -p ha-917095 status -v=7 --alsologtostderr: (1.050729083s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (18.81s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.78s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.78s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (132.47s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-917095 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-917095 -v=7 --alsologtostderr
E0807 18:48:34.880179  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/functional-022013/client.crt: no such file or directory
E0807 18:48:34.885505  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/functional-022013/client.crt: no such file or directory
E0807 18:48:34.895740  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/functional-022013/client.crt: no such file or directory
E0807 18:48:34.916175  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/functional-022013/client.crt: no such file or directory
E0807 18:48:34.956440  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/functional-022013/client.crt: no such file or directory
E0807 18:48:35.036959  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/functional-022013/client.crt: no such file or directory
E0807 18:48:35.197347  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/functional-022013/client.crt: no such file or directory
E0807 18:48:35.518218  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/functional-022013/client.crt: no such file or directory
E0807 18:48:36.158420  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/functional-022013/client.crt: no such file or directory
E0807 18:48:37.438828  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/functional-022013/client.crt: no such file or directory
E0807 18:48:39.999424  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/functional-022013/client.crt: no such file or directory
E0807 18:48:45.119620  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/functional-022013/client.crt: no such file or directory
ha_test.go:462: (dbg) Done: out/minikube-linux-arm64 stop -p ha-917095 -v=7 --alsologtostderr: (37.29083356s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-arm64 start -p ha-917095 --wait=true -v=7 --alsologtostderr
E0807 18:48:55.360575  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/functional-022013/client.crt: no such file or directory
E0807 18:49:05.258821  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/addons-553671/client.crt: no such file or directory
E0807 18:49:15.840787  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/functional-022013/client.crt: no such file or directory
E0807 18:49:32.945657  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/addons-553671/client.crt: no such file or directory
E0807 18:49:56.801719  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/functional-022013/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-arm64 start -p ha-917095 --wait=true -v=7 --alsologtostderr: (1m35.029479545s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-917095
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (132.47s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (11.57s)
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-arm64 -p ha-917095 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-arm64 -p ha-917095 node delete m03 -v=7 --alsologtostderr: (10.545172801s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-arm64 -p ha-917095 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.57s)
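
Note: the go-template passed to kubectl above walks each node's conditions and prints the status of the one whose type is Ready. A minimal Go sketch of the same template logic against a stubbed node list; field names are exported here because Go templates over structs, unlike kubectl's JSON paths, require exported identifiers:

package main

import (
	"os"
	"text/template"
)

// Trimmed stand-ins for the node list kubectl templates over.
type condition struct{ Type, Status string }
type nodeStatus struct{ Conditions []condition }
type node struct{ Status nodeStatus }
type nodeList struct{ Items []node }

func main() {
	// Same logic as the test's template, with exported field names.
	const tmpl = `{{range .Items}}{{range .Status.Conditions}}` +
		`{{if eq .Type "Ready"}} {{.Status}}{{"\n"}}{{end}}{{end}}{{end}}`

	list := nodeList{Items: []node{
		{Status: nodeStatus{Conditions: []condition{{Type: "Ready", Status: "True"}}}},
		{Status: nodeStatus{Conditions: []condition{{Type: "Ready", Status: "True"}}}},
	}}
	// Prints " True" once per node, mirroring the post-delete cluster above.
	template.Must(template.New("ready").Parse(tmpl)).Execute(os.Stdout, list)
}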

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.58s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.58s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (36.07s)
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-arm64 -p ha-917095 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-linux-arm64 -p ha-917095 stop -v=7 --alsologtostderr: (35.960278791s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-arm64 -p ha-917095 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-917095 status -v=7 --alsologtostderr: exit status 7 (109.485853ms)

                                                
                                                
-- stdout --
	ha-917095
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-917095-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-917095-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0807 18:51:16.539756  516219 out.go:291] Setting OutFile to fd 1 ...
	I0807 18:51:16.539952  516219 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 18:51:16.539983  516219 out.go:304] Setting ErrFile to fd 2...
	I0807 18:51:16.540001  516219 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 18:51:16.540381  516219 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19389-443116/.minikube/bin
	I0807 18:51:16.540630  516219 out.go:298] Setting JSON to false
	I0807 18:51:16.540682  516219 mustload.go:65] Loading cluster: ha-917095
	I0807 18:51:16.541093  516219 notify.go:220] Checking for updates...
	I0807 18:51:16.541620  516219 config.go:182] Loaded profile config "ha-917095": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.3
	I0807 18:51:16.542304  516219 status.go:255] checking status of ha-917095 ...
	I0807 18:51:16.543076  516219 cli_runner.go:164] Run: docker container inspect ha-917095 --format={{.State.Status}}
	I0807 18:51:16.560800  516219 status.go:330] ha-917095 host status = "Stopped" (err=<nil>)
	I0807 18:51:16.560820  516219 status.go:343] host is not running, skipping remaining checks
	I0807 18:51:16.560828  516219 status.go:257] ha-917095 status: &{Name:ha-917095 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0807 18:51:16.560861  516219 status.go:255] checking status of ha-917095-m02 ...
	I0807 18:51:16.561165  516219 cli_runner.go:164] Run: docker container inspect ha-917095-m02 --format={{.State.Status}}
	I0807 18:51:16.578246  516219 status.go:330] ha-917095-m02 host status = "Stopped" (err=<nil>)
	I0807 18:51:16.578275  516219 status.go:343] host is not running, skipping remaining checks
	I0807 18:51:16.578283  516219 status.go:257] ha-917095-m02 status: &{Name:ha-917095-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0807 18:51:16.578305  516219 status.go:255] checking status of ha-917095-m04 ...
	I0807 18:51:16.578658  516219 cli_runner.go:164] Run: docker container inspect ha-917095-m04 --format={{.State.Status}}
	I0807 18:51:16.597795  516219 status.go:330] ha-917095-m04 host status = "Stopped" (err=<nil>)
	I0807 18:51:16.597821  516219 status.go:343] host is not running, skipping remaining checks
	I0807 18:51:16.597829  516219 status.go:257] ha-917095-m04 status: &{Name:ha-917095-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.07s)
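
Note: with every host stopped, `minikube status` still prints the per-node table but exits nonzero (exit status 7 in this run). A minimal Go sketch of scripting against that behavior, assuming minikube is on PATH and the ha-917095 profile exists; the exit-code meaning is taken from this transcript, not from a documented guarantee:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("minikube", "-p", "ha-917095", "status").CombinedOutput()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// This run exits 7 when every host reports "Stopped".
		fmt.Printf("status exited %d:\n%s", exitErr.ExitCode(), out)
		return
	}
	if err != nil {
		panic(err) // binary missing or similar, not an exit-status failure
	}
	fmt.Printf("cluster up:\n%s", out)
}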

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (64.77s)
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-arm64 start -p ha-917095 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E0807 18:51:18.722444  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/functional-022013/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-arm64 start -p ha-917095 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m3.796278098s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-arm64 -p ha-917095 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (64.77s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.55s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.55s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (42.47s)
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-917095 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-arm64 node add -p ha-917095 --control-plane -v=7 --alsologtostderr: (41.437376346s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-arm64 -p ha-917095 status -v=7 --alsologtostderr
ha_test.go:611: (dbg) Done: out/minikube-linux-arm64 -p ha-917095 status -v=7 --alsologtostderr: (1.035189768s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (42.47s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.8s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.80s)

                                                
                                    
TestJSONOutput/start/Command (61.67s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-013774 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd
E0807 18:53:34.879833  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/functional-022013/client.crt: no such file or directory
E0807 18:54:02.563791  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/functional-022013/client.crt: no such file or directory
E0807 18:54:05.258220  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/addons-553671/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-013774 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd: (1m1.669469989s)
--- PASS: TestJSONOutput/start/Command (61.67s)

                                                
                                    
TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.75s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-013774 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.75s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.7s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-013774 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.70s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.73s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-013774 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-013774 --output=json --user=testUser: (5.726963979s)
--- PASS: TestJSONOutput/stop/Command (5.73s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.23s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-630251 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-630251 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (79.155388ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"298d9660-85d2-4e01-a60e-cb6cacf2f218","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-630251] minikube v1.33.1 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"49274e3c-deac-43f2-a1ff-8ee333427708","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19389"}}
	{"specversion":"1.0","id":"85e8f6ef-b40a-4ce9-b080-d4da1e04b481","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"04c8d2ef-c323-4409-babd-8cb7d605e566","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19389-443116/kubeconfig"}}
	{"specversion":"1.0","id":"d31855c9-de23-4412-80f5-f4dbcb9e0618","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19389-443116/.minikube"}}
	{"specversion":"1.0","id":"ff40573b-53d7-4ece-b9a4-1d1e23556cc1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"a70ed4e7-3150-45c1-9525-57cb9cba6793","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"d096a72b-0d14-45bd-9b12-73217571a3be","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-630251" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-630251
--- PASS: TestErrorJSONOutput (0.23s)
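
Note: the stdout above is a stream of CloudEvents-style JSON objects, one per line, with the payload under "data". A minimal Go sketch that picks error events out of such a stream; the struct mirrors only the fields visible in this transcript, not a published schema:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event covers just the fields seen in the log lines above.
type event struct {
	SpecVersion string            `json:"specversion"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin) // e.g. pipe `minikube start --output=json` in
	for sc.Scan() {
		var e event
		if json.Unmarshal(sc.Bytes(), &e) != nil {
			continue // skip any non-JSON lines
		}
		if e.Type == "io.k8s.sigs.minikube.error" {
			fmt.Println("error:", e.Data["message"], "exitcode:", e.Data["exitcode"])
		}
	}
}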

                                                
                                    
TestKicCustomNetwork/create_custom_network (42.13s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-655384 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-655384 --network=: (40.043135709s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-655384" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-655384
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-655384: (2.066114046s)
--- PASS: TestKicCustomNetwork/create_custom_network (42.13s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (35.15s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-902790 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-902790 --network=bridge: (33.109778658s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-902790" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-902790
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-902790: (2.023161607s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (35.15s)

                                                
                                    
TestKicExistingNetwork (33.3s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-348947 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-348947 --network=existing-network: (31.069886527s)
helpers_test.go:175: Cleaning up "existing-network-348947" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-348947
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-348947: (2.069979745s)
--- PASS: TestKicExistingNetwork (33.30s)

                                                
                                    
TestKicCustomSubnet (34.37s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-919563 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-919563 --subnet=192.168.60.0/24: (32.158484957s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-919563 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-919563" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-919563
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-919563: (2.186545991s)
--- PASS: TestKicCustomSubnet (34.37s)
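
Note: the subnet check above is a plain `docker network inspect` with a Go template over the network's IPAM config. A sketch re-running that exact command from Go, assuming the docker CLI is on PATH and the network created in this run still exists:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same inspection the test runs, verbatim format string.
	out, err := exec.Command("docker", "network", "inspect", "custom-subnet-919563",
		"--format", "{{(index .IPAM.Config 0).Subnet}}").Output()
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("subnet:", strings.TrimSpace(string(out))) // 192.168.60.0/24 in this run
}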

                                                
                                    
TestKicStaticIP (31.32s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-326918 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-326918 --static-ip=192.168.200.200: (29.09067532s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-326918 ip
helpers_test.go:175: Cleaning up "static-ip-326918" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-326918
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-326918: (2.080731545s)
--- PASS: TestKicStaticIP (31.32s)

                                                
                                    
TestMainNoArgs (0.06s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (70.84s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-110747 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-110747 --driver=docker  --container-runtime=containerd: (33.472853346s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-113216 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-113216 --driver=docker  --container-runtime=containerd: (32.075208483s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-110747
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-113216
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-113216" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-113216
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-113216: (1.993957465s)
helpers_test.go:175: Cleaning up "first-110747" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-110747
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-110747: (1.977514347s)
--- PASS: TestMinikubeProfile (70.84s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (6.2s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-003381 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
E0807 18:58:34.880917  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/functional-022013/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-003381 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.195547298s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.20s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.26s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-003381 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (6.32s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-016607 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-016607 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.318953593s)
--- PASS: TestMountStart/serial/StartWithMountSecond (6.32s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.25s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-016607 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.25s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.6s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-003381 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-003381 --alsologtostderr -v=5: (1.598203269s)
--- PASS: TestMountStart/serial/DeleteFirst (1.60s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.26s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-016607 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

                                                
                                    
TestMountStart/serial/Stop (1.2s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-016607
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-016607: (1.200875622s)
--- PASS: TestMountStart/serial/Stop (1.20s)

                                                
                                    
TestMountStart/serial/RestartStopped (7.83s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-016607
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-016607: (6.825940731s)
--- PASS: TestMountStart/serial/RestartStopped (7.83s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.25s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-016607 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.25s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (90.4s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-504882 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0807 18:59:05.258658  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/addons-553671/client.crt: no such file or directory
E0807 19:00:28.306278  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/addons-553671/client.crt: no such file or directory
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-504882 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m29.797586893s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-504882 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (90.40s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (17.18s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-504882 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-504882 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-504882 -- rollout status deployment/busybox: (15.298951894s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-504882 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-504882 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-504882 -- exec busybox-fc5497c4f-n8tjc -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-504882 -- exec busybox-fc5497c4f-rjvtm -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-504882 -- exec busybox-fc5497c4f-n8tjc -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-504882 -- exec busybox-fc5497c4f-rjvtm -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-504882 -- exec busybox-fc5497c4f-n8tjc -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-504882 -- exec busybox-fc5497c4f-rjvtm -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (17.18s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.92s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-504882 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-504882 -- exec busybox-fc5497c4f-n8tjc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-504882 -- exec busybox-fc5497c4f-n8tjc -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-504882 -- exec busybox-fc5497c4f-rjvtm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-504882 -- exec busybox-fc5497c4f-rjvtm -- sh -c "ping -c 1 192.168.58.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.92s)
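
Note: the nslookup | awk | cut pipeline above extracts the IP that host.minikube.internal resolves to inside a pod. From Go code running in the cluster, a plain resolver lookup does the same job; this sketch only resolves where minikube's DNS or hosts-file plumbing serves that name, i.e. inside a pod or node, not on an arbitrary machine:

package main

import (
	"fmt"
	"net"
)

func main() {
	addrs, err := net.LookupHost("host.minikube.internal")
	if err != nil {
		fmt.Println("lookup failed (expected outside the cluster):", err)
		return
	}
	fmt.Println("host gateway:", addrs) // 192.168.58.1 in the pings above
}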

                                                
                                    
TestMultiNode/serial/AddNode (17.56s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-504882 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-504882 -v 3 --alsologtostderr: (16.841577358s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-504882 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (17.56s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.09s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-504882 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.34s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.34s)

                                                
                                    
TestMultiNode/serial/CopyFile (10.25s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-504882 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-504882 cp testdata/cp-test.txt multinode-504882:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-504882 ssh -n multinode-504882 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-504882 cp multinode-504882:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3318065550/001/cp-test_multinode-504882.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-504882 ssh -n multinode-504882 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-504882 cp multinode-504882:/home/docker/cp-test.txt multinode-504882-m02:/home/docker/cp-test_multinode-504882_multinode-504882-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-504882 ssh -n multinode-504882 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-504882 ssh -n multinode-504882-m02 "sudo cat /home/docker/cp-test_multinode-504882_multinode-504882-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-504882 cp multinode-504882:/home/docker/cp-test.txt multinode-504882-m03:/home/docker/cp-test_multinode-504882_multinode-504882-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-504882 ssh -n multinode-504882 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-504882 ssh -n multinode-504882-m03 "sudo cat /home/docker/cp-test_multinode-504882_multinode-504882-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-504882 cp testdata/cp-test.txt multinode-504882-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-504882 ssh -n multinode-504882-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-504882 cp multinode-504882-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3318065550/001/cp-test_multinode-504882-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-504882 ssh -n multinode-504882-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-504882 cp multinode-504882-m02:/home/docker/cp-test.txt multinode-504882:/home/docker/cp-test_multinode-504882-m02_multinode-504882.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-504882 ssh -n multinode-504882-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-504882 ssh -n multinode-504882 "sudo cat /home/docker/cp-test_multinode-504882-m02_multinode-504882.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-504882 cp multinode-504882-m02:/home/docker/cp-test.txt multinode-504882-m03:/home/docker/cp-test_multinode-504882-m02_multinode-504882-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-504882 ssh -n multinode-504882-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-504882 ssh -n multinode-504882-m03 "sudo cat /home/docker/cp-test_multinode-504882-m02_multinode-504882-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-504882 cp testdata/cp-test.txt multinode-504882-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-504882 ssh -n multinode-504882-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-504882 cp multinode-504882-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3318065550/001/cp-test_multinode-504882-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-504882 ssh -n multinode-504882-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-504882 cp multinode-504882-m03:/home/docker/cp-test.txt multinode-504882:/home/docker/cp-test_multinode-504882-m03_multinode-504882.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-504882 ssh -n multinode-504882-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-504882 ssh -n multinode-504882 "sudo cat /home/docker/cp-test_multinode-504882-m03_multinode-504882.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-504882 cp multinode-504882-m03:/home/docker/cp-test.txt multinode-504882-m02:/home/docker/cp-test_multinode-504882-m03_multinode-504882-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-504882 ssh -n multinode-504882-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-504882 ssh -n multinode-504882-m02 "sudo cat /home/docker/cp-test_multinode-504882-m03_multinode-504882-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.25s)
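
Note: each hop in the copy matrix above pairs a `minikube cp` with an `ssh ... sudo cat` readback. A condensed Go sketch of one such hop, assuming the multinode-504882 profile is still running and the minikube binary is on PATH:

package main

import (
	"fmt"
	"os/exec"
)

// run executes one command and returns its combined output, printing failures.
func run(args ...string) string {
	out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
	if err != nil {
		fmt.Printf("%v failed: %v\n%s", args, err, out)
	}
	return string(out)
}

func main() {
	// Copy a local file into the primary node, then read it back over ssh.
	run("minikube", "-p", "multinode-504882", "cp",
		"testdata/cp-test.txt", "multinode-504882:/home/docker/cp-test.txt")
	fmt.Print(run("minikube", "-p", "multinode-504882", "ssh", "-n", "multinode-504882",
		"sudo cat /home/docker/cp-test.txt"))
}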

                                                
                                    
TestMultiNode/serial/StopNode (2.24s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-504882 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-504882 node stop m03: (1.210730249s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-504882 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-504882 status: exit status 7 (512.459516ms)

                                                
                                                
-- stdout --
	multinode-504882
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-504882-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-504882-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-504882 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-504882 status --alsologtostderr: exit status 7 (515.335841ms)

                                                
                                                
-- stdout --
	multinode-504882
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-504882-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-504882-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0807 19:01:19.037770  570268 out.go:291] Setting OutFile to fd 1 ...
	I0807 19:01:19.037936  570268 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 19:01:19.037948  570268 out.go:304] Setting ErrFile to fd 2...
	I0807 19:01:19.037954  570268 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 19:01:19.038222  570268 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19389-443116/.minikube/bin
	I0807 19:01:19.038430  570268 out.go:298] Setting JSON to false
	I0807 19:01:19.038465  570268 mustload.go:65] Loading cluster: multinode-504882
	I0807 19:01:19.038538  570268 notify.go:220] Checking for updates...
	I0807 19:01:19.038905  570268 config.go:182] Loaded profile config "multinode-504882": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.3
	I0807 19:01:19.038923  570268 status.go:255] checking status of multinode-504882 ...
	I0807 19:01:19.039511  570268 cli_runner.go:164] Run: docker container inspect multinode-504882 --format={{.State.Status}}
	I0807 19:01:19.058711  570268 status.go:330] multinode-504882 host status = "Running" (err=<nil>)
	I0807 19:01:19.058743  570268 host.go:66] Checking if "multinode-504882" exists ...
	I0807 19:01:19.059040  570268 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-504882
	I0807 19:01:19.076996  570268 host.go:66] Checking if "multinode-504882" exists ...
	I0807 19:01:19.077342  570268 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0807 19:01:19.077395  570268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-504882
	I0807 19:01:19.103746  570268 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33303 SSHKeyPath:/home/jenkins/minikube-integration/19389-443116/.minikube/machines/multinode-504882/id_rsa Username:docker}
	I0807 19:01:19.201425  570268 ssh_runner.go:195] Run: systemctl --version
	I0807 19:01:19.205724  570268 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0807 19:01:19.217269  570268 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0807 19:01:19.278801  570268 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:61 SystemTime:2024-08-07 19:01:19.26031716 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0807 19:01:19.279396  570268 kubeconfig.go:125] found "multinode-504882" server: "https://192.168.58.2:8443"
	I0807 19:01:19.279421  570268 api_server.go:166] Checking apiserver status ...
	I0807 19:01:19.279466  570268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0807 19:01:19.291017  570268 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1461/cgroup
	I0807 19:01:19.300903  570268 api_server.go:182] apiserver freezer: "8:freezer:/docker/05cf88678259ab2f558aa0d4195f265144997910627abf33b5fc4be3ceedff6f/kubepods/burstable/pode79ef7b1c8a65af33e5e55ef4f7d7d99/5ce79768ce2b4494059e2b333bb53f3aad0af1640ad8eba8c098d4a262bb7b1c"
	I0807 19:01:19.300986  570268 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/05cf88678259ab2f558aa0d4195f265144997910627abf33b5fc4be3ceedff6f/kubepods/burstable/pode79ef7b1c8a65af33e5e55ef4f7d7d99/5ce79768ce2b4494059e2b333bb53f3aad0af1640ad8eba8c098d4a262bb7b1c/freezer.state
	I0807 19:01:19.309585  570268 api_server.go:204] freezer state: "THAWED"
	I0807 19:01:19.309616  570268 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0807 19:01:19.317255  570268 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0807 19:01:19.317281  570268 status.go:422] multinode-504882 apiserver status = Running (err=<nil>)
	I0807 19:01:19.317294  570268 status.go:257] multinode-504882 status: &{Name:multinode-504882 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0807 19:01:19.317340  570268 status.go:255] checking status of multinode-504882-m02 ...
	I0807 19:01:19.317656  570268 cli_runner.go:164] Run: docker container inspect multinode-504882-m02 --format={{.State.Status}}
	I0807 19:01:19.334507  570268 status.go:330] multinode-504882-m02 host status = "Running" (err=<nil>)
	I0807 19:01:19.334536  570268 host.go:66] Checking if "multinode-504882-m02" exists ...
	I0807 19:01:19.334842  570268 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-504882-m02
	I0807 19:01:19.352142  570268 host.go:66] Checking if "multinode-504882-m02" exists ...
	I0807 19:01:19.352566  570268 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0807 19:01:19.352620  570268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-504882-m02
	I0807 19:01:19.369174  570268 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33308 SSHKeyPath:/home/jenkins/minikube-integration/19389-443116/.minikube/machines/multinode-504882-m02/id_rsa Username:docker}
	I0807 19:01:19.469758  570268 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0807 19:01:19.482517  570268 status.go:257] multinode-504882-m02 status: &{Name:multinode-504882-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0807 19:01:19.482553  570268 status.go:255] checking status of multinode-504882-m03 ...
	I0807 19:01:19.482951  570268 cli_runner.go:164] Run: docker container inspect multinode-504882-m03 --format={{.State.Status}}
	I0807 19:01:19.499250  570268 status.go:330] multinode-504882-m03 host status = "Stopped" (err=<nil>)
	I0807 19:01:19.499274  570268 status.go:343] host is not running, skipping remaining checks
	I0807 19:01:19.499295  570268 status.go:257] multinode-504882-m03 status: &{Name:multinode-504882-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.24s)
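
Note: the stderr above shows how the status command probes a running apiserver: find the process, read its freezer cgroup, then GET /healthz over HTTPS. A throwaway Go sketch of just the HTTP step against the address from this log; certificate verification is skipped because the cluster CA isn't loaded here, and an unauthenticated /healthz may be rejected on locked-down clusters:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Skip verification only for a throwaway probe like this one.
	client := &http.Client{
		Timeout:   3 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.58.2:8443/healthz")
	if err != nil {
		fmt.Println("healthz unreachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("healthz:", resp.Status) // the status check above saw 200 "ok"
}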

                                                
                                    
TestMultiNode/serial/StartAfterStop (9.78s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-504882 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-504882 node start m03 -v=7 --alsologtostderr: (8.962603202s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-504882 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.78s)

TestMultiNode/serial/RestartKeepsNodes (141.26s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-504882
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-504882
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-504882: (25.067459667s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-504882 --wait=true -v=8 --alsologtostderr
E0807 19:03:34.879836  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/functional-022013/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-504882 --wait=true -v=8 --alsologtostderr: (1m56.072663143s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-504882
--- PASS: TestMultiNode/serial/RestartKeepsNodes (141.26s)

TestMultiNode/serial/DeleteNode (5.95s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-504882 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-504882 node delete m03: (5.304808465s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-504882 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.95s)
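The go-template on the last kubectl line extracts each node's Ready condition so the test can assert every remaining node is healthy after the delete. The same check written against the Kubernetes API with client-go, as a sketch (the kubeconfig path is this CI run's; substitute your own):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19389-443116/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	// Print the Ready condition per node, like the go-template above does.
	for _, n := range nodes.Items {
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady {
				fmt.Printf("%s Ready=%s\n", n.Name, c.Status)
			}
		}
	}
}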

TestMultiNode/serial/StopMultiNode (24.06s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-504882 stop
E0807 19:04:05.258279  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/addons-553671/client.crt: no such file or directory
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-504882 stop: (23.888107368s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-504882 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-504882 status: exit status 7 (86.692403ms)
-- stdout --
	multinode-504882
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-504882-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-504882 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-504882 status --alsologtostderr: exit status 7 (81.292311ms)
-- stdout --
	multinode-504882
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-504882-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0807 19:04:20.514801  578880 out.go:291] Setting OutFile to fd 1 ...
	I0807 19:04:20.514936  578880 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 19:04:20.514946  578880 out.go:304] Setting ErrFile to fd 2...
	I0807 19:04:20.514951  578880 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 19:04:20.515223  578880 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19389-443116/.minikube/bin
	I0807 19:04:20.515398  578880 out.go:298] Setting JSON to false
	I0807 19:04:20.515439  578880 mustload.go:65] Loading cluster: multinode-504882
	I0807 19:04:20.515492  578880 notify.go:220] Checking for updates...
	I0807 19:04:20.515837  578880 config.go:182] Loaded profile config "multinode-504882": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.3
	I0807 19:04:20.515849  578880 status.go:255] checking status of multinode-504882 ...
	I0807 19:04:20.516328  578880 cli_runner.go:164] Run: docker container inspect multinode-504882 --format={{.State.Status}}
	I0807 19:04:20.536188  578880 status.go:330] multinode-504882 host status = "Stopped" (err=<nil>)
	I0807 19:04:20.536213  578880 status.go:343] host is not running, skipping remaining checks
	I0807 19:04:20.536221  578880 status.go:257] multinode-504882 status: &{Name:multinode-504882 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0807 19:04:20.536247  578880 status.go:255] checking status of multinode-504882-m02 ...
	I0807 19:04:20.536620  578880 cli_runner.go:164] Run: docker container inspect multinode-504882-m02 --format={{.State.Status}}
	I0807 19:04:20.552259  578880 status.go:330] multinode-504882-m02 host status = "Stopped" (err=<nil>)
	I0807 19:04:20.552283  578880 status.go:343] host is not running, skipping remaining checks
	I0807 19:04:20.552291  578880 status.go:257] multinode-504882-m02 status: &{Name:multinode-504882-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.06s)
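Note the "Non-zero exit ... exit status 7" lines: minikube status deliberately exits non-zero for a stopped profile, so a caller has to inspect the exit code rather than treat any error as a hard failure. A small sketch of that handling:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "-p", "multinode-504882", "status")
	out, err := cmd.Output()
	fmt.Print(string(out))
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		// Exit status 7 is what the run above reports for a cleanly
		// stopped profile, so it is expected rather than fatal here.
		fmt.Println("status exit code:", ee.ExitCode())
	}
}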

TestMultiNode/serial/RestartMultiNode (52.12s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-504882 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0807 19:04:57.924402  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/functional-022013/client.crt: no such file or directory
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-504882 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (51.415714483s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-504882 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (52.12s)

TestMultiNode/serial/ValidateNameConflict (34.76s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-504882
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-504882-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-504882-m02 --driver=docker  --container-runtime=containerd: exit status 14 (75.405555ms)
-- stdout --
	* [multinode-504882-m02] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19389
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19389-443116/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19389-443116/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-504882-m02' is duplicated with machine name 'multinode-504882-m02' in profile 'multinode-504882'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-504882-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-504882-m03 --driver=docker  --container-runtime=containerd: (32.195665306s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-504882
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-504882: exit status 80 (474.636814ms)
-- stdout --
	* Adding node m03 to cluster multinode-504882 as [worker]
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-504882-m03 already exists in multinode-504882-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-504882-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-504882-m03: (1.966920492s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (34.76s)

TestPreload (117.26s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-836602 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-836602 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m12.345235854s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-836602 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-836602 image pull gcr.io/k8s-minikube/busybox: (1.277753306s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-836602
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-836602: (12.081726542s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-836602 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-836602 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (28.994426682s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-836602 image list
helpers_test.go:175: Cleaning up "test-preload-836602" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-836602
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-836602: (2.320694658s)
--- PASS: TestPreload (117.26s)
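The preload test above is a five-step CLI sequence: start with --preload=false on an older Kubernetes, pull an extra image, stop, restart with preloading enabled, and confirm the pulled image survived via image list. A sketch that drives the same steps with os/exec (the profile name is illustrative; the flags are the ones in the log):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// run executes one minikube step and aborts on the first failure,
// mirroring the sequence TestPreload drives above.
func run(args ...string) {
	cmd := exec.Command("minikube", args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Println("step failed:", args, err)
		os.Exit(1)
	}
}

func main() {
	p := "test-preload-demo" // illustrative profile name
	run("start", "-p", p, "--memory=2200", "--preload=false",
		"--driver=docker", "--container-runtime=containerd",
		"--kubernetes-version=v1.24.4")
	run("-p", p, "image", "pull", "gcr.io/k8s-minikube/busybox")
	run("stop", "-p", p)
	// Restart with preloading enabled; the pulled image should still be listed.
	run("start", "-p", p, "--memory=2200")
	run("-p", p, "image", "list")
}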

TestScheduledStopUnix (108.82s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-400401 --memory=2048 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-400401 --memory=2048 --driver=docker  --container-runtime=containerd: (33.182396549s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-400401 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-400401 -n scheduled-stop-400401
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-400401 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-400401 --cancel-scheduled
E0807 19:08:34.879867  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/functional-022013/client.crt: no such file or directory
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-400401 -n scheduled-stop-400401
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-400401
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-400401 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0807 19:09:05.257887  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/addons-553671/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-400401
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-400401: exit status 7 (70.287586ms)
-- stdout --
	scheduled-stop-400401
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-400401 -n scheduled-stop-400401
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-400401 -n scheduled-stop-400401: exit status 7 (69.316342ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-400401" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-400401
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-400401: (4.084211229s)
--- PASS: TestScheduledStopUnix (108.82s)
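The scheduled-stop flow boils down to issuing stop --schedule and then polling status --format={{.Host}} until it reports Stopped, which is exactly what the test does above. A sketch of that loop (the profile name is illustrative):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	p := "scheduled-stop-demo" // illustrative profile name

	// Schedule a stop 15 seconds out, as the test does.
	if err := exec.Command("minikube", "stop", "-p", p, "--schedule", "15s").Run(); err != nil {
		fmt.Println("schedule failed:", err)
		return
	}
	// Poll the host state until the scheduled stop has landed.
	for i := 0; i < 12; i++ {
		out, _ := exec.Command("minikube", "status",
			"--format={{.Host}}", "-p", p).Output()
		state := strings.TrimSpace(string(out))
		fmt.Println("host:", state)
		if state == "Stopped" {
			return
		}
		time.Sleep(5 * time.Second)
	}
}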

TestInsufficientStorage (10.62s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-927421 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-927421 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (8.101359918s)
-- stdout --
	{"specversion":"1.0","id":"e9bb424a-eb0c-4671-81e1-9481f9542746","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-927421] minikube v1.33.1 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"f65afa3d-514b-4519-be82-c7a55542057f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19389"}}
	{"specversion":"1.0","id":"b164defb-1fc3-45b2-89a3-32d9ebbb29e0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"31494391-14f2-4afe-a702-51b18a715587","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19389-443116/kubeconfig"}}
	{"specversion":"1.0","id":"2912992f-0571-4977-9523-4a448a89d00c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19389-443116/.minikube"}}
	{"specversion":"1.0","id":"c484cf36-052c-46d4-80bc-4e5603d8db59","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"3cef6e43-bedc-42c2-8d84-bc53a41ade96","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"6f9836e2-7f61-4575-bdde-de9f7fb4700b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"43664b86-b81b-447c-ba4d-75fcb1d87e5d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"555910de-2daf-4ec6-b8d2-69bcc81da92b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"c6dbff3d-1ed3-407d-a9db-40d5302387e7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"43181c26-d24d-4d89-8df6-fa4250778361","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-927421\" primary control-plane node in \"insufficient-storage-927421\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"1b25a352-88bd-436d-bfac-1940495821c3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.44-1723026928-19389 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"94959cf7-650c-4925-a320-103abd1b4b16","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"14c6b720-1c3e-455c-be99-a0cc9bba6eb7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-927421 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-927421 --output=json --layout=cluster: exit status 7 (288.734899ms)
-- stdout --
	{"Name":"insufficient-storage-927421","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-927421","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E0807 19:09:45.879815  597577 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-927421" does not appear in /home/jenkins/minikube-integration/19389-443116/kubeconfig
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-927421 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-927421 --output=json --layout=cluster: exit status 7 (289.625715ms)
-- stdout --
	{"Name":"insufficient-storage-927421","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-927421","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E0807 19:09:46.171828  597640 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-927421" does not appear in /home/jenkins/minikube-integration/19389-443116/kubeconfig
	E0807 19:09:46.181889  597640 status.go:560] unable to read event log: stat: stat /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/insufficient-storage-927421/events.json: no such file or directory
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-927421" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-927421
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-927421: (1.937811638s)
--- PASS: TestInsufficientStorage (10.62s)
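With --output=json, minikube start emits one CloudEvents JSON object per line, and the RSRC_DOCKER_STORAGE failure above arrives as an event of type io.k8s.sigs.minikube.error. A sketch that scans the stream for error events (the profile name is illustrative, and only the fields this check needs are decoded):

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os/exec"
)

// event mirrors the CloudEvents lines shown in the stdout above; the
// data payload of these events is a flat string-to-string map.
type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	cmd := exec.Command("minikube", "start", "-p", "demo",
		"--output=json", "--driver=docker", "--container-runtime=containerd")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		panic(err)
	}
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	sc := bufio.NewScanner(stdout)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // event lines can be long
	for sc.Scan() {
		var e event
		if json.Unmarshal(sc.Bytes(), &e) != nil {
			continue // skip anything that is not a JSON event line
		}
		if e.Type == "io.k8s.sigs.minikube.error" {
			fmt.Println("error event:", e.Data["name"], "exit code", e.Data["exitcode"])
		}
	}
	_ = cmd.Wait() // start exits non-zero on failures like the one above
}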

TestRunningBinaryUpgrade (92.1s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1998452582 start -p running-upgrade-334576 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1998452582 start -p running-upgrade-334576 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (43.329890488s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-334576 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0807 19:17:08.309121  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/addons-553671/client.crt: no such file or directory
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-334576 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (44.310287509s)
helpers_test.go:175: Cleaning up "running-upgrade-334576" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-334576
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-334576: (3.353121791s)
--- PASS: TestRunningBinaryUpgrade (92.10s)

TestKubernetesUpgrade (362.42s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-493892 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-493892 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m6.18544279s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-493892
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-493892: (1.239637748s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-493892 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-493892 status --format={{.Host}}: exit status 7 (65.203472ms)
-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-493892 --memory=2200 --kubernetes-version=v1.31.0-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-493892 --memory=2200 --kubernetes-version=v1.31.0-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m41.398434267s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-493892 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-493892 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-493892 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd: exit status 106 (114.239678ms)
-- stdout --
	* [kubernetes-upgrade-493892] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19389
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19389-443116/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19389-443116/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0-rc.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-493892
	    minikube start -p kubernetes-upgrade-493892 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4938922 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0-rc.0, by running:
	    
	    minikube start -p kubernetes-upgrade-493892 --kubernetes-version=v1.31.0-rc.0
	    
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-493892 --memory=2200 --kubernetes-version=v1.31.0-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-493892 --memory=2200 --kubernetes-version=v1.31.0-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (10.62338936s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-493892" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-493892
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-493892: (2.620277676s)
--- PASS: TestKubernetesUpgrade (362.42s)

TestMissingContainerUpgrade (176s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.3570432513 start -p missing-upgrade-641385 --memory=2200 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.3570432513 start -p missing-upgrade-641385 --memory=2200 --driver=docker  --container-runtime=containerd: (1m29.649951907s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-641385
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-641385: (10.329795001s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-641385
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-641385 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0807 19:13:34.891712  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/functional-022013/client.crt: no such file or directory
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-641385 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m11.853145141s)
helpers_test.go:175: Cleaning up "missing-upgrade-641385" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-641385
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-641385: (2.317023162s)
--- PASS: TestMissingContainerUpgrade (176.00s)

TestPause/serial/Start (83.67s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-500078 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-500078 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (1m23.669060494s)
--- PASS: TestPause/serial/Start (83.67s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-036728 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-036728 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd: exit status 14 (94.389574ms)
-- stdout --
	* [NoKubernetes-036728] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19389
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19389-443116/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19389-443116/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

TestNoKubernetes/serial/StartWithK8s (43.93s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-036728 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-036728 --driver=docker  --container-runtime=containerd: (43.589297599s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-036728 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (43.93s)

TestNoKubernetes/serial/StartWithStopK8s (7.69s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-036728 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-036728 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.350670777s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-036728 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-036728 status -o json: exit status 2 (308.721693ms)
-- stdout --
	{"Name":"NoKubernetes-036728","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-036728
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-036728: (2.030173144s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (7.69s)

TestNoKubernetes/serial/Start (6.99s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-036728 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-036728 --no-kubernetes --driver=docker  --container-runtime=containerd: (6.991100397s)
--- PASS: TestNoKubernetes/serial/Start (6.99s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.29s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-036728 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-036728 "sudo systemctl is-active --quiet service kubelet": exit status 1 (285.362575ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.29s)
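The "Process exited with status 3" in the stderr above is systemctl's exit code for a unit that is not active, and minikube ssh propagates the remote command's status back to the caller; that propagation is what makes this a usable no-kubelet check. The same probe from Go, as a sketch:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Mirrors the exact command the test runs over `minikube ssh`.
	err := exec.Command("minikube", "ssh", "-p", "NoKubernetes-036728",
		"sudo systemctl is-active --quiet service kubelet").Run()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		// Non-zero here means the kubelet unit is not active, which is
		// the expected outcome for a --no-kubernetes profile.
		fmt.Println("kubelet not running, ssh exit code:", ee.ExitCode())
	} else if err == nil {
		fmt.Println("kubelet is active")
	}
}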

TestNoKubernetes/serial/ProfileList (1.03s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.03s)
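profile list --output=json returns the profiles grouped under top-level valid and invalid keys. A sketch that decodes just the names and statuses (the struct shape is an assumption based on that grouping; fields not declared are ignored by the decoder):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// profiles declares only the parts of the JSON this check reads.
type profiles struct {
	Valid []struct {
		Name   string `json:"Name"`
		Status string `json:"Status"`
	} `json:"valid"`
	Invalid []json.RawMessage `json:"invalid"`
}

func main() {
	out, err := exec.Command("minikube", "profile", "list", "--output=json").Output()
	if err != nil {
		panic(err)
	}
	var p profiles
	if err := json.Unmarshal(out, &p); err != nil {
		panic(err)
	}
	for _, v := range p.Valid {
		fmt.Printf("%s: %s\n", v.Name, v.Status)
	}
	fmt.Println("invalid profiles:", len(p.Invalid))
}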

TestNoKubernetes/serial/Stop (1.22s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-036728
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-036728: (1.219421637s)
--- PASS: TestNoKubernetes/serial/Stop (1.22s)

TestNoKubernetes/serial/StartNoArgs (6.86s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-036728 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-036728 --driver=docker  --container-runtime=containerd: (6.857597556s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.86s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-036728 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-036728 "sudo systemctl is-active --quiet service kubelet": exit status 1 (280.919499ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

TestPause/serial/SecondStartNoReconfiguration (8.29s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-500078 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-500078 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (8.270254386s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (8.29s)

TestPause/serial/Pause (0.95s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-500078 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.95s)

TestPause/serial/VerifyStatus (0.43s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-500078 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-500078 --output=json --layout=cluster: exit status 2 (429.433932ms)
-- stdout --
	{"Name":"pause-500078","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-500078","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.43s)
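The --layout=cluster JSON above encodes state as HTTP-style status codes (418 for Paused, 405 for Stopped), and the command itself exits non-zero for a paused cluster even though the JSON on stdout is complete. A sketch that decodes the fields shown in that stdout:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// clusterState mirrors the fields of the --layout=cluster output above.
type clusterState struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
	Nodes      []struct {
		Name       string `json:"Name"`
		StatusName string `json:"StatusName"`
	} `json:"Nodes"`
}

func main() {
	// Ignore the exit error: a paused cluster makes status exit 2, as in
	// the run above, but stdout still carries the full JSON document.
	out, _ := exec.Command("minikube", "status", "-p", "pause-500078",
		"--output=json", "--layout=cluster").Output()
	var st clusterState
	if err := json.Unmarshal(out, &st); err != nil {
		panic(err)
	}
	fmt.Printf("%s: %s (%d)\n", st.Name, st.StatusName, st.StatusCode)
	for _, n := range st.Nodes {
		fmt.Printf("  node %s: %s\n", n.Name, n.StatusName)
	}
}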

TestPause/serial/Unpause (0.81s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-500078 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.81s)

TestPause/serial/PauseAgain (1.18s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-500078 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-500078 --alsologtostderr -v=5: (1.178585897s)
--- PASS: TestPause/serial/PauseAgain (1.18s)

TestPause/serial/DeletePaused (3.98s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-500078 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-500078 --alsologtostderr -v=5: (3.975438581s)
--- PASS: TestPause/serial/DeletePaused (3.98s)

TestPause/serial/VerifyDeletedResources (0.19s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-500078
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-500078: exit status 1 (22.785313ms)
-- stdout --
	[]
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-500078: no such volume
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.19s)

TestStoppedBinaryUpgrade/Setup (1.14s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.14s)

TestStoppedBinaryUpgrade/Upgrade (128.63s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1090965780 start -p stopped-upgrade-867265 --memory=2200 --vm-driver=docker  --container-runtime=containerd
E0807 19:14:05.258048  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/addons-553671/client.crt: no such file or directory
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1090965780 start -p stopped-upgrade-867265 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (48.118765313s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1090965780 -p stopped-upgrade-867265 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1090965780 -p stopped-upgrade-867265 stop: (19.99814864s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-867265 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-867265 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m0.514106857s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (128.63s)
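The upgrade test is two binaries driving one profile: the released binary creates and stops the cluster, then the freshly built binary must be able to start it again. A sketch of that handoff (binary paths and the profile name are placeholders; the flags are the ones in the log, including the older binary's --vm-driver spelling):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// step runs one command with a given binary and aborts on failure.
func step(bin string, args ...string) {
	cmd := exec.Command(bin, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Println("upgrade step failed:", bin, args, err)
		os.Exit(1)
	}
}

func main() {
	oldBin := "/tmp/minikube-v1.26.0" // released binary (placeholder path)
	newBin := "out/minikube-linux-arm64"
	p := "stopped-upgrade-demo" // illustrative profile name

	step(oldBin, "start", "-p", p, "--memory=2200",
		"--vm-driver=docker", "--container-runtime=containerd")
	step(oldBin, "-p", p, "stop")
	// The new binary must adopt and start the cluster the old one created.
	step(newBin, "start", "-p", p, "--memory=2200",
		"--driver=docker", "--container-runtime=containerd")
}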

TestStoppedBinaryUpgrade/MinikubeLogs (1.26s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-867265
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-867265: (1.26005119s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.26s)

TestNetworkPlugins/group/false (4.6s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-378386 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-378386 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (230.566614ms)
-- stdout --
	* [false-378386] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19389
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19389-443116/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19389-443116/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	
-- /stdout --
** stderr ** 
	I0807 19:17:44.355088  637242 out.go:291] Setting OutFile to fd 1 ...
	I0807 19:17:44.355315  637242 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 19:17:44.355329  637242 out.go:304] Setting ErrFile to fd 2...
	I0807 19:17:44.355345  637242 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 19:17:44.355626  637242 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19389-443116/.minikube/bin
	I0807 19:17:44.356095  637242 out.go:298] Setting JSON to false
	I0807 19:17:44.357133  637242 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":10816,"bootTime":1723047449,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0807 19:17:44.357215  637242 start.go:139] virtualization:  
	I0807 19:17:44.360845  637242 out.go:177] * [false-378386] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0807 19:17:44.363276  637242 out.go:177]   - MINIKUBE_LOCATION=19389
	I0807 19:17:44.363417  637242 notify.go:220] Checking for updates...
	I0807 19:17:44.367647  637242 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0807 19:17:44.370426  637242 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19389-443116/kubeconfig
	I0807 19:17:44.372323  637242 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19389-443116/.minikube
	I0807 19:17:44.374186  637242 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0807 19:17:44.375953  637242 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0807 19:17:44.378743  637242 config.go:182] Loaded profile config "force-systemd-flag-727876": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.3
	I0807 19:17:44.378903  637242 driver.go:392] Setting default libvirt URI to qemu:///system
	I0807 19:17:44.404042  637242 docker.go:123] docker version: linux-27.1.1:Docker Engine - Community
	I0807 19:17:44.404172  637242 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0807 19:17:44.506040  637242 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2024-08-07 19:17:44.493415936 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0807 19:17:44.506161  637242 docker.go:307] overlay module found
	I0807 19:17:44.508184  637242 out.go:177] * Using the docker driver based on user configuration
	I0807 19:17:44.510128  637242 start.go:297] selected driver: docker
	I0807 19:17:44.510146  637242 start.go:901] validating driver "docker" against <nil>
	I0807 19:17:44.510160  637242 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0807 19:17:44.512864  637242 out.go:177] 
	W0807 19:17:44.514481  637242 out.go:239] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0807 19:17:44.516704  637242 out.go:177] 
** /stderr **
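Note: the MK_USAGE exit in the stderr above is minikube's own validation that the "containerd" runtime must be paired with a CNI, and for this test it is the expected result: the "false" network-plugin group deliberately starts with CNI disabled and only checks that minikube refuses the combination (the test passes below). As a sketch of how an interactive user would get past the same check (the --cni value here is illustrative; any supported CNI choice would do):

  out/minikube-linux-arm64 start -p false-378386 --driver=docker --container-runtime=containerd --cni=bridge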
net_test.go:88: 
----------------------- debugLogs start: false-378386 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-378386

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-378386

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-378386

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-378386

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-378386

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-378386

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-378386

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-378386

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-378386

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-378386

>>> host: /etc/nsswitch.conf:
* Profile "false-378386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378386"

>>> host: /etc/hosts:
* Profile "false-378386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378386"

>>> host: /etc/resolv.conf:
* Profile "false-378386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378386"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-378386

>>> host: crictl pods:
* Profile "false-378386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378386"

>>> host: crictl containers:
* Profile "false-378386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378386"

>>> k8s: describe netcat deployment:
error: context "false-378386" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-378386" does not exist

>>> k8s: netcat logs:
error: context "false-378386" does not exist

>>> k8s: describe coredns deployment:
error: context "false-378386" does not exist

>>> k8s: describe coredns pods:
error: context "false-378386" does not exist

>>> k8s: coredns logs:
error: context "false-378386" does not exist

>>> k8s: describe api server pod(s):
error: context "false-378386" does not exist

>>> k8s: api server logs:
error: context "false-378386" does not exist

>>> host: /etc/cni:
* Profile "false-378386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378386"

>>> host: ip a s:
* Profile "false-378386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378386"

>>> host: ip r s:
* Profile "false-378386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378386"

>>> host: iptables-save:
* Profile "false-378386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378386"

>>> host: iptables table nat:
* Profile "false-378386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378386"

>>> k8s: describe kube-proxy daemon set:
error: context "false-378386" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-378386" does not exist

>>> k8s: kube-proxy logs:
error: context "false-378386" does not exist

>>> host: kubelet daemon status:
* Profile "false-378386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378386"

>>> host: kubelet daemon config:
* Profile "false-378386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378386"

>>> k8s: kubelet logs:
* Profile "false-378386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378386"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-378386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378386"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-378386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378386"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-378386

>>> host: docker daemon status:
* Profile "false-378386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378386"

>>> host: docker daemon config:
* Profile "false-378386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378386"

>>> host: /etc/docker/daemon.json:
* Profile "false-378386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378386"

>>> host: docker system info:
* Profile "false-378386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378386"

>>> host: cri-docker daemon status:
* Profile "false-378386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378386"

>>> host: cri-docker daemon config:
* Profile "false-378386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378386"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-378386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378386"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-378386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378386"

>>> host: cri-dockerd version:
* Profile "false-378386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378386"

>>> host: containerd daemon status:
* Profile "false-378386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378386"

>>> host: containerd daemon config:
* Profile "false-378386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378386"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-378386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378386"

>>> host: /etc/containerd/config.toml:
* Profile "false-378386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378386"

>>> host: containerd config dump:
* Profile "false-378386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378386"

>>> host: crio daemon status:
* Profile "false-378386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378386"

>>> host: crio daemon config:
* Profile "false-378386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378386"

>>> host: /etc/crio:
* Profile "false-378386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378386"

>>> host: crio config:
* Profile "false-378386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378386"

----------------------- debugLogs end: false-378386 [took: 4.180337137s] --------------------------------
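Note: every probe in the debugLogs above fails identically because the "false-378386" cluster was never created; the start command exited at the CNI check, so there is no minikube profile and no kubeconfig context for the collectors to use. Two quick ways to confirm that state from the same workspace:

  out/minikube-linux-arm64 profile list
  kubectl config get-contexts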
helpers_test.go:175: Cleaning up "false-378386" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-378386
--- PASS: TestNetworkPlugins/group/false (4.60s)

TestStartStop/group/old-k8s-version/serial/FirstStart (142.6s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-145103 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
E0807 19:21:37.925424  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/functional-022013/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-145103 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: (2m22.599076834s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (142.60s)
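Note: the E... cert_rotation.go:168 lines that recur through the rest of this report (one appears in the block above) are emitted by client-go's certificate-rotation watcher inside the test binary: it appears to still hold a reference to the client certificate of a profile deleted earlier in the run (functional-022013 here), so each periodic re-read fails with "no such file or directory". They are log noise rather than test failures; a simple spot check is confirming the file really is gone:

  ls /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/functional-022013/client.crt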

TestStartStop/group/old-k8s-version/serial/DeployApp (9.7s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-145103 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [065edcbb-e33c-4fe3-ae37-87fe207df0ad] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [065edcbb-e33c-4fe3-ae37-87fe207df0ad] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.004489393s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-145103 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.70s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.55s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-145103 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-145103 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.368216284s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-145103 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.55s)

TestStartStop/group/old-k8s-version/serial/Stop (13.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-145103 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-145103 --alsologtostderr -v=3: (13.063189234s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (13.06s)

TestStartStop/group/no-preload/serial/FirstStart (67.31s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-708131 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0-rc.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-708131 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0-rc.0: (1m7.307914517s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (67.31s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-145103 -n old-k8s-version-145103
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-145103 -n old-k8s-version-145103: exit status 7 (93.516528ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-145103 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)
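Note: "status error: exit status 7 (may be ok)" is expected at this point. minikube's status command composes its exit code from bit flags rather than treating non-zero as fatal (in minikube's implementation there are flags for the host not running, the cluster components not running, and the kubeconfig not being configured), so 7 after a Stop simply reports that everything is down, which is the state this test wants before re-enabling the addon. The same scheme accounts for the "exit status 2" readings in the Pause tests further down. A manual probe would be, e.g.:

  out/minikube-linux-arm64 status -p old-k8s-version-145103 -n old-k8s-version-145103; echo $?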

TestStartStop/group/no-preload/serial/DeployApp (10.39s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-708131 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [e0188dda-0df7-43dc-b93b-27e6ed1e75b2] Pending
helpers_test.go:344: "busybox" [e0188dda-0df7-43dc-b93b-27e6ed1e75b2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [e0188dda-0df7-43dc-b93b-27e6ed1e75b2] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.003753026s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-708131 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.39s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.16s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-708131 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-708131 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.037937341s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-708131 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.16s)

TestStartStop/group/no-preload/serial/Stop (12.09s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-708131 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-708131 --alsologtostderr -v=3: (12.08633837s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.09s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-708131 -n no-preload-708131
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-708131 -n no-preload-708131: exit status 7 (69.042194ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-708131 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/no-preload/serial/SecondStart (266.88s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-708131 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0-rc.0
E0807 19:23:34.879538  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/functional-022013/client.crt: no such file or directory
E0807 19:24:05.257904  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/addons-553671/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-708131 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0-rc.0: (4m26.525876221s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-708131 -n no-preload-708131
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (266.88s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-rd9tb" [f5245a4b-b22a-4ce4-8f34-b7a6ad9df6b9] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004555097s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.1s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-rd9tb" [f5245a4b-b22a-4ce4-8f34-b7a6ad9df6b9] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003644404s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-708131 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.10s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-708131 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240730-75a5af0c
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/no-preload/serial/Pause (3.16s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-708131 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-708131 -n no-preload-708131
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-708131 -n no-preload-708131: exit status 2 (343.962791ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-708131 -n no-preload-708131
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-708131 -n no-preload-708131: exit status 2 (349.851116ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-708131 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-708131 -n no-preload-708131
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-708131 -n no-preload-708131
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.16s)

TestStartStop/group/embed-certs/serial/FirstStart (71.42s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-313116 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-313116 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.3: (1m11.415608158s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (71.42s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-6swd7" [bd1e7050-f5dc-482f-9e57-b3ff5d710b68] Running
E0807 19:28:34.879822  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/functional-022013/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004852151s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.15s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-6swd7" [bd1e7050-f5dc-482f-9e57-b3ff5d710b68] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005137008s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-145103 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.15s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-145103 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240730-75a5af0c
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.28s)

TestStartStop/group/old-k8s-version/serial/Pause (4.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-145103 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p old-k8s-version-145103 --alsologtostderr -v=1: (1.176615938s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-145103 -n old-k8s-version-145103
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-145103 -n old-k8s-version-145103: exit status 2 (467.626572ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-145103 -n old-k8s-version-145103
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-145103 -n old-k8s-version-145103: exit status 2 (412.104452ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-145103 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-145103 -n old-k8s-version-145103
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-145103 -n old-k8s-version-145103
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (4.06s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (68.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-480254 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.3
E0807 19:29:05.258511  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/addons-553671/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-480254 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.3: (1m8.100919521s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (68.10s)

TestStartStop/group/embed-certs/serial/DeployApp (7.4s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-313116 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [3332d575-d5e1-4a4d-b7cb-5fa89be20ea8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [3332d575-d5e1-4a4d-b7cb-5fa89be20ea8] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 7.004570861s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-313116 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (7.40s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.28s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-313116 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-313116 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.157211128s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-313116 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.28s)

TestStartStop/group/embed-certs/serial/Stop (12.31s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-313116 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-313116 --alsologtostderr -v=3: (12.314633358s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.31s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-313116 -n embed-certs-313116
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-313116 -n embed-certs-313116: exit status 7 (66.381728ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-313116 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/embed-certs/serial/SecondStart (267.66s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-313116 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.3
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-313116 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.3: (4m27.302628997s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-313116 -n embed-certs-313116
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (267.66s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (7.43s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-480254 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [40fec926-e029-43aa-9da0-341b37db449f] Pending
helpers_test.go:344: "busybox" [40fec926-e029-43aa-9da0-341b37db449f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [40fec926-e029-43aa-9da0-341b37db449f] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 7.003819641s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-480254 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (7.43s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.14s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-480254 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-480254 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.028651839s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-480254 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.14s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.25s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-480254 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-480254 --alsologtostderr -v=3: (12.253782484s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.25s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.25s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-480254 -n default-k8s-diff-port-480254
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-480254 -n default-k8s-diff-port-480254: exit status 7 (98.100887ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-480254 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.25s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (268.43s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-480254 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.3
E0807 19:31:45.658956  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/old-k8s-version-145103/client.crt: no such file or directory
E0807 19:31:45.664918  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/old-k8s-version-145103/client.crt: no such file or directory
E0807 19:31:45.675216  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/old-k8s-version-145103/client.crt: no such file or directory
E0807 19:31:45.695569  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/old-k8s-version-145103/client.crt: no such file or directory
E0807 19:31:45.735902  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/old-k8s-version-145103/client.crt: no such file or directory
E0807 19:31:45.816243  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/old-k8s-version-145103/client.crt: no such file or directory
E0807 19:31:45.976613  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/old-k8s-version-145103/client.crt: no such file or directory
E0807 19:31:46.297001  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/old-k8s-version-145103/client.crt: no such file or directory
E0807 19:31:46.937662  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/old-k8s-version-145103/client.crt: no such file or directory
E0807 19:31:48.218467  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/old-k8s-version-145103/client.crt: no such file or directory
E0807 19:31:50.778946  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/old-k8s-version-145103/client.crt: no such file or directory
E0807 19:31:55.900103  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/old-k8s-version-145103/client.crt: no such file or directory
E0807 19:32:06.141101  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/old-k8s-version-145103/client.crt: no such file or directory
E0807 19:32:26.621980  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/old-k8s-version-145103/client.crt: no such file or directory
E0807 19:33:05.261759  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/no-preload-708131/client.crt: no such file or directory
E0807 19:33:05.267082  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/no-preload-708131/client.crt: no such file or directory
E0807 19:33:05.277401  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/no-preload-708131/client.crt: no such file or directory
E0807 19:33:05.297732  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/no-preload-708131/client.crt: no such file or directory
E0807 19:33:05.338036  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/no-preload-708131/client.crt: no such file or directory
E0807 19:33:05.418291  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/no-preload-708131/client.crt: no such file or directory
E0807 19:33:05.578645  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/no-preload-708131/client.crt: no such file or directory
E0807 19:33:05.899167  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/no-preload-708131/client.crt: no such file or directory
E0807 19:33:06.539519  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/no-preload-708131/client.crt: no such file or directory
E0807 19:33:07.582332  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/old-k8s-version-145103/client.crt: no such file or directory
E0807 19:33:07.820678  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/no-preload-708131/client.crt: no such file or directory
E0807 19:33:10.381826  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/no-preload-708131/client.crt: no such file or directory
E0807 19:33:15.502702  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/no-preload-708131/client.crt: no such file or directory
E0807 19:33:25.743590  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/no-preload-708131/client.crt: no such file or directory
E0807 19:33:34.879563  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/functional-022013/client.crt: no such file or directory
E0807 19:33:46.223862  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/no-preload-708131/client.crt: no such file or directory
E0807 19:33:48.310055  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/addons-553671/client.crt: no such file or directory
E0807 19:34:05.258444  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/addons-553671/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-480254 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.3: (4m27.98617093s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-480254 -n default-k8s-diff-port-480254
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (268.43s)
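Note on the E0807 cert_rotation lines interleaved above: they appear to come from client-go's certificate-rotation watcher (cert_rotation.go:168) still polling client.crt paths of profiles that earlier tests in this run had already torn down (old-k8s-version-145103, no-preload-708131, functional-022013, addons-553671). The errors land in whichever test is currently writing to the log and did not affect this result. A quick way to see which profiles still exist at any point (an illustrative check, not something the suite runs):

    out/minikube-linux-arm64 profile list   # lists surviving profiles; the deleted ones referenced above should be absent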

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-8nztv" [d508333a-1178-4a5d-874d-c245569eac24] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003793055s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.11s)
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-8nztv" [d508333a-1178-4a5d-874d-c245569eac24] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00341183s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-313116 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.11s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-313116 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240715-585640e9
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240730-75a5af0c
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)
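The image verification above is a thin wrapper around minikube's own listing; the same data can be pulled by hand from this run's profile (a sketch using only the command the test invokes):

    out/minikube-linux-arm64 -p embed-certs-313116 image list --format=json   # JSON listing of images loaded in the node

The test scans that output for repo tags outside the expected Kubernetes set; extras such as kindest/kindnetd and the busybox test image are logged as "non-minikube" but, as the PASS shows, do not fail the check.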

TestStartStop/group/embed-certs/serial/Pause (3.27s)
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-313116 --alsologtostderr -v=1
E0807 19:34:27.184543  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/no-preload-708131/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-313116 -n embed-certs-313116
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-313116 -n embed-certs-313116: exit status 2 (341.737383ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-313116 -n embed-certs-313116
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-313116 -n embed-certs-313116: exit status 2 (331.134638ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-313116 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-313116 -n embed-certs-313116
E0807 19:34:29.503054  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/old-k8s-version-145103/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-313116 -n embed-certs-313116
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.27s)
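The Pause sequence recorded above, condensed to its raw CLI calls (taken verbatim from the log, minus the harness wrappers):

    out/minikube-linux-arm64 pause -p embed-certs-313116 --alsologtostderr -v=1
    out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-313116 -n embed-certs-313116   # prints "Paused", exit status 2
    out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-313116 -n embed-certs-313116     # prints "Stopped", exit status 2
    out/minikube-linux-arm64 unpause -p embed-certs-313116 --alsologtostderr -v=1

The exit-status-2 results are the expected signal that components are paused, which is why the harness annotates them "status error: exit status 2 (may be ok)" rather than failing.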

TestStartStop/group/newest-cni/serial/FirstStart (42.93s)
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-632796 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0-rc.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-632796 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0-rc.0: (42.926747166s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (42.93s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-fdplm" [e1b7e08f-d903-4946-b4b3-0f8c166a7a17] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004659633s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.12s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-fdplm" [e1b7e08f-d903-4946-b4b3-0f8c166a7a17] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004302208s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-480254 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.12s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.28s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-480254 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240715-585640e9
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240730-75a5af0c
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.28s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (4.44s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-480254 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p default-k8s-diff-port-480254 --alsologtostderr -v=1: (1.036525794s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-480254 -n default-k8s-diff-port-480254
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-480254 -n default-k8s-diff-port-480254: exit status 2 (385.444113ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-480254 -n default-k8s-diff-port-480254
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-480254 -n default-k8s-diff-port-480254: exit status 2 (337.248063ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-480254 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p default-k8s-diff-port-480254 --alsologtostderr -v=1: (1.434008247s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-480254 -n default-k8s-diff-port-480254
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-480254 -n default-k8s-diff-port-480254
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (4.44s)

TestNetworkPlugins/group/auto/Start (71.22s)
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-378386 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-378386 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (1m11.224068662s)
--- PASS: TestNetworkPlugins/group/auto/Start (71.22s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.41s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-632796 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-632796 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.41353901s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.41s)

TestStartStop/group/newest-cni/serial/Stop (1.39s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-632796 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-632796 --alsologtostderr -v=3: (1.391082255s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.39s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.26s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-632796 -n newest-cni-632796
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-632796 -n newest-cni-632796: exit status 7 (100.163147ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-632796 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.26s)
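A detail worth noting here: enabling an addon against a stopped cluster succeeds, presumably because the setting is only persisted in the profile's config to be applied on the next start (an inference from the Stopped status above, not something the log states). The sequence, reduced to its two CLI calls:

    out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-632796 -n newest-cni-632796   # prints "Stopped", exit status 7
    out/minikube-linux-arm64 addons enable dashboard -p newest-cni-632796 --images=MetricsScraper=registry.k8s.io/echoserver:1.4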

TestStartStop/group/newest-cni/serial/SecondStart (22.87s)
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-632796 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0-rc.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-632796 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0-rc.0: (22.239604877s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-632796 -n newest-cni-632796
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (22.87s)
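SecondStart deliberately reuses the FirstStart flags, so only the restart path is new. Because this profile runs in CNI mode with no network plugin configured yet, the suite waits only on control-plane readiness (--wait=apiserver,system_pods,default_sa); the WARNING lines in the following subtests confirm pods are not expected to schedule. A trimmed sketch of the restart-and-verify pair (diagnostic flags dropped for brevity):

    out/minikube-linux-arm64 start -p newest-cni-632796 --memory=2200 --wait=apiserver,system_pods,default_sa --network-plugin=cni --kubernetes-version=v1.31.0-rc.0 --driver=docker --container-runtime=containerd
    out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-632796 -n newest-cni-632796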

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.37s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-632796 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240719-e7903573
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240730-75a5af0c
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.37s)

TestStartStop/group/newest-cni/serial/Pause (4.61s)
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-632796 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p newest-cni-632796 --alsologtostderr -v=1: (1.465401318s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-632796 -n newest-cni-632796
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-632796 -n newest-cni-632796: exit status 2 (360.773066ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-632796 -n newest-cni-632796
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-632796 -n newest-cni-632796: exit status 2 (488.8117ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-632796 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p newest-cni-632796 --alsologtostderr -v=1: (1.059577007s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-632796 -n newest-cni-632796
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-632796 -n newest-cni-632796
--- PASS: TestStartStop/group/newest-cni/serial/Pause (4.61s)
E0807 19:41:17.778359  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/auto-378386/client.crt: no such file or directory
E0807 19:41:17.783701  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/auto-378386/client.crt: no such file or directory
E0807 19:41:17.794001  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/auto-378386/client.crt: no such file or directory
E0807 19:41:17.814330  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/auto-378386/client.crt: no such file or directory
E0807 19:41:17.854680  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/auto-378386/client.crt: no such file or directory
E0807 19:41:17.935029  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/auto-378386/client.crt: no such file or directory
E0807 19:41:18.095597  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/auto-378386/client.crt: no such file or directory
E0807 19:41:18.416253  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/auto-378386/client.crt: no such file or directory
E0807 19:41:19.057091  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/auto-378386/client.crt: no such file or directory
E0807 19:41:19.148302  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/default-k8s-diff-port-480254/client.crt: no such file or directory
E0807 19:41:20.337448  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/auto-378386/client.crt: no such file or directory
E0807 19:41:22.897747  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/auto-378386/client.crt: no such file or directory
E0807 19:41:28.018778  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/auto-378386/client.crt: no such file or directory
E0807 19:41:38.259706  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/auto-378386/client.crt: no such file or directory
E0807 19:41:45.659038  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/old-k8s-version-145103/client.crt: no such file or directory

TestNetworkPlugins/group/kindnet/Start (70.97s)
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-378386 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-378386 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (1m10.965148363s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (70.97s)

TestNetworkPlugins/group/auto/KubeletFlags (0.42s)
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-378386 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.42s)

TestNetworkPlugins/group/auto/NetCatPod (10.43s)
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-378386 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-5vhqd" [f494c0a6-b1d6-4b02-88de-62cfe3156a09] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-5vhqd" [f494c0a6-b1d6-4b02-88de-62cfe3156a09] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.005636371s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.43s)

TestNetworkPlugins/group/auto/DNS (0.2s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-378386 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.20s)

TestNetworkPlugins/group/auto/Localhost (0.15s)
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-378386 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

TestNetworkPlugins/group/auto/HairPin (0.18s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-378386 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.18s)
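DNS, Localhost, and HairPin above are the same three probes every NetworkPlugins group runs from inside its netcat deployment; lifted verbatim from the log for the auto profile:

    kubectl --context auto-378386 exec deployment/netcat -- nslookup kubernetes.default                    # DNS: resolve the cluster API service
    kubectl --context auto-378386 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"    # Localhost: reach the pod's own port
    kubectl --context auto-378386 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"       # HairPin: reach the pod back through its own service name

nc's -z flag probes the port without sending data, and -w 5 bounds each attempt at five seconds, which keeps these subtests in the sub-second range seen above.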

TestNetworkPlugins/group/calico/Start (84.85s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-378386 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-378386 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (1m24.847750513s)
--- PASS: TestNetworkPlugins/group/calico/Start (84.85s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-fdqtj" [b1b7e638-5924-417f-9c4d-ae0f887ff5c2] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003871496s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.42s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-378386 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.42s)

TestNetworkPlugins/group/kindnet/NetCatPod (11.32s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-378386 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-cd6kx" [7ba2fdcf-9f76-48d2-b246-c73f60caf908] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0807 19:37:13.344077  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/old-k8s-version-145103/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-cd6kx" [7ba2fdcf-9f76-48d2-b246-c73f60caf908] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.004349593s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.32s)

TestNetworkPlugins/group/kindnet/DNS (0.28s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-378386 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.28s)

TestNetworkPlugins/group/kindnet/Localhost (0.23s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-378386 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.23s)

TestNetworkPlugins/group/kindnet/HairPin (0.24s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-378386 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.24s)

TestNetworkPlugins/group/custom-flannel/Start (63.57s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-378386 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
E0807 19:38:05.261871  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/no-preload-708131/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-378386 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (1m3.566886766s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (63.57s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-vwhms" [ca51c16f-4e98-4ade-83b8-7b80a045876e] Running
E0807 19:38:17.926708  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/functional-022013/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004732712s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.34s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-378386 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.34s)

TestNetworkPlugins/group/calico/NetCatPod (12.31s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-378386 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-z2zjt" [775d98f2-84a1-4b40-80ea-a6e03ce3e0af] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-z2zjt" [775d98f2-84a1-4b40-80ea-a6e03ce3e0af] Running
E0807 19:38:32.945161  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/no-preload-708131/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.004181343s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.31s)

TestNetworkPlugins/group/calico/DNS (0.23s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-378386 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.23s)

TestNetworkPlugins/group/calico/Localhost (0.2s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-378386 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
E0807 19:38:34.879204  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/functional-022013/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.20s)

TestNetworkPlugins/group/calico/HairPin (0.19s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-378386 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.19s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.46s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-378386 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.46s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (10.4s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-378386 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-5482k" [838940f2-51c6-4899-837a-0fd5cdc38c41] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-5482k" [838940f2-51c6-4899-837a-0fd5cdc38c41] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.00373813s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.40s)

TestNetworkPlugins/group/custom-flannel/DNS (0.23s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-378386 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.23s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.19s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-378386 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.19s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-378386 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

TestNetworkPlugins/group/enable-default-cni/Start (97.11s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-378386 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
E0807 19:39:05.258672  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/addons-553671/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-378386 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (1m37.114155916s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (97.11s)

TestNetworkPlugins/group/flannel/Start (63.11s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-378386 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
E0807 19:39:57.225685  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/default-k8s-diff-port-480254/client.crt: no such file or directory
E0807 19:39:57.230925  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/default-k8s-diff-port-480254/client.crt: no such file or directory
E0807 19:39:57.241196  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/default-k8s-diff-port-480254/client.crt: no such file or directory
E0807 19:39:57.261583  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/default-k8s-diff-port-480254/client.crt: no such file or directory
E0807 19:39:57.301833  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/default-k8s-diff-port-480254/client.crt: no such file or directory
E0807 19:39:57.382103  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/default-k8s-diff-port-480254/client.crt: no such file or directory
E0807 19:39:57.542359  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/default-k8s-diff-port-480254/client.crt: no such file or directory
E0807 19:39:57.862871  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/default-k8s-diff-port-480254/client.crt: no such file or directory
E0807 19:39:58.503506  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/default-k8s-diff-port-480254/client.crt: no such file or directory
E0807 19:39:59.784258  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/default-k8s-diff-port-480254/client.crt: no such file or directory
E0807 19:40:02.344971  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/default-k8s-diff-port-480254/client.crt: no such file or directory
E0807 19:40:07.466033  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/default-k8s-diff-port-480254/client.crt: no such file or directory
E0807 19:40:17.706888  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/default-k8s-diff-port-480254/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-378386 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (1m3.104896578s)
--- PASS: TestNetworkPlugins/group/flannel/Start (63.11s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-x2x4m" [0b9d076c-da08-4385-a7e4-f69f3e120d44] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.00391077s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
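ControllerPod polls for a Running pod carrying the CNI's label via the suite's own helper (helpers_test.go:344). A hand-rolled equivalent with stock kubectl would look like the line below; kubectl wait is my substitution here, not what the harness calls:

    kubectl --context flannel-378386 -n kube-flannel wait --for=condition=Ready pod -l app=flannel --timeout=10m   # hypothetical stand-in for the suite's poller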

TestNetworkPlugins/group/flannel/KubeletFlags (0.29s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-378386 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.29s)

TestNetworkPlugins/group/flannel/NetCatPod (9.25s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-378386 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-nhvgb" [2339d050-be4a-44fa-bc4d-077d056a623c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-nhvgb" [2339d050-be4a-44fa-bc4d-077d056a623c] Running
E0807 19:40:38.187569  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/default-k8s-diff-port-480254/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.004383772s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.25s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.39s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-378386 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.39s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.32s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-378386 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-77d78" [7cb07a56-1155-43e5-8943-08a062ea51d2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-77d78" [7cb07a56-1155-43e5-8943-08a062ea51d2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.004801822s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.32s)
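NetCatPod installs its workload with kubectl replace --force, which deletes and recreates the deployment, then polls for a Running pod. Outside the harness the same gate could be approximated with a rollout check (an illustrative alternative, not the suite's mechanism):

    kubectl --context enable-default-cni-378386 replace --force -f testdata/netcat-deployment.yaml
    kubectl --context enable-default-cni-378386 rollout status deployment/netcat --timeout=15m   # blocks until the deployment's pods are ready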

TestNetworkPlugins/group/flannel/DNS (0.19s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-378386 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.19s)

TestNetworkPlugins/group/flannel/Localhost (0.18s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-378386 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.18s)

TestNetworkPlugins/group/flannel/HairPin (0.17s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-378386 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.17s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-378386 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-378386 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-378386 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

TestNetworkPlugins/group/bridge/Start (48.63s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-378386 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-378386 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (48.630028143s)
--- PASS: TestNetworkPlugins/group/bridge/Start (48.63s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.28s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-378386 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.28s)

TestNetworkPlugins/group/bridge/NetCatPod (10.30s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-378386 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-nptmk" [fdc30746-aac0-4ad9-8d29-5f7f432aa7a8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0807 19:41:58.739976  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/auto-378386/client.crt: no such file or directory
E0807 19:42:00.959372  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/kindnet-378386/client.crt: no such file or directory
E0807 19:42:00.964595  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/kindnet-378386/client.crt: no such file or directory
E0807 19:42:00.974854  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/kindnet-378386/client.crt: no such file or directory
E0807 19:42:00.995101  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/kindnet-378386/client.crt: no such file or directory
E0807 19:42:01.035910  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/kindnet-378386/client.crt: no such file or directory
E0807 19:42:01.116183  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/kindnet-378386/client.crt: no such file or directory
E0807 19:42:01.276606  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/kindnet-378386/client.crt: no such file or directory
E0807 19:42:01.597111  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/kindnet-378386/client.crt: no such file or directory
E0807 19:42:02.237883  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/kindnet-378386/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-nptmk" [fdc30746-aac0-4ad9-8d29-5f7f432aa7a8] Running
E0807 19:42:03.518123  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/kindnet-378386/client.crt: no such file or directory
E0807 19:42:06.078656  448488 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-443116/.minikube/profiles/kindnet-378386/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.00433545s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.30s)

TestNetworkPlugins/group/bridge/DNS (0.18s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-378386 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.18s)

TestNetworkPlugins/group/bridge/Localhost (0.14s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-378386 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.14s)

TestNetworkPlugins/group/bridge/HairPin (0.15s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-378386 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)

Test skip (31/336)

TestDownloadOnly/v1.20.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.30.3/cached-images (0s)
=== RUN   TestDownloadOnly/v1.30.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.3/cached-images (0.00s)

TestDownloadOnly/v1.30.3/binaries (0s)
=== RUN   TestDownloadOnly/v1.30.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.3/binaries (0.00s)

TestDownloadOnly/v1.30.3/kubectl (0s)
=== RUN   TestDownloadOnly/v1.30.3/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.3/kubectl (0.00s)

TestDownloadOnly/v1.31.0-rc.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.31.0-rc.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0-rc.0/cached-images (0.00s)

TestDownloadOnly/v1.31.0-rc.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.31.0-rc.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0-rc.0/binaries (0.00s)

TestDownloadOnly/v1.31.0-rc.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.31.0-rc.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.0-rc.0/kubectl (0.00s)

TestDownloadOnlyKic (0.56s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-768978 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-768978" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-768978
--- SKIP: TestDownloadOnlyKic (0.56s)

TestOffline (0s)
=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/parallel/HelmTiller (0s)
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:446: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.15s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-901531" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-901531
--- SKIP: TestStartStop/group/disable-driver-mounts (0.15s)

TestNetworkPlugins/group/kubenet (4.69s)
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-378386 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-378386

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-378386

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-378386

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-378386

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-378386

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-378386

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-378386

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-378386

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-378386

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-378386

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-378386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378386"

>>> host: /etc/hosts:
* Profile "kubenet-378386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378386"

>>> host: /etc/resolv.conf:
* Profile "kubenet-378386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378386"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-378386

>>> host: crictl pods:
* Profile "kubenet-378386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378386"

>>> host: crictl containers:
* Profile "kubenet-378386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378386"

>>> k8s: describe netcat deployment:
error: context "kubenet-378386" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-378386" does not exist

>>> k8s: netcat logs:
error: context "kubenet-378386" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-378386" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-378386" does not exist

>>> k8s: coredns logs:
error: context "kubenet-378386" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-378386" does not exist

>>> k8s: api server logs:
error: context "kubenet-378386" does not exist

>>> host: /etc/cni:
* Profile "kubenet-378386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378386"

>>> host: ip a s:
* Profile "kubenet-378386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378386"

>>> host: ip r s:
* Profile "kubenet-378386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378386"

>>> host: iptables-save:
* Profile "kubenet-378386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378386"

>>> host: iptables table nat:
* Profile "kubenet-378386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378386"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-378386" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-378386" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-378386" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-378386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378386"

>>> host: kubelet daemon config:
* Profile "kubenet-378386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378386"

>>> k8s: kubelet logs:
* Profile "kubenet-378386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378386"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-378386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378386"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-378386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378386"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-378386

>>> host: docker daemon status:
* Profile "kubenet-378386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378386"

>>> host: docker daemon config:
* Profile "kubenet-378386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378386"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-378386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378386"

>>> host: docker system info:
* Profile "kubenet-378386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378386"

>>> host: cri-docker daemon status:
* Profile "kubenet-378386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378386"

>>> host: cri-docker daemon config:
* Profile "kubenet-378386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378386"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-378386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378386"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-378386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378386"

>>> host: cri-dockerd version:
* Profile "kubenet-378386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378386"

>>> host: containerd daemon status:
* Profile "kubenet-378386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378386"

>>> host: containerd daemon config:
* Profile "kubenet-378386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378386"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-378386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378386"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-378386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378386"

>>> host: containerd config dump:
* Profile "kubenet-378386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378386"

>>> host: crio daemon status:
* Profile "kubenet-378386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378386"

>>> host: crio daemon config:
* Profile "kubenet-378386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378386"

>>> host: /etc/crio:
* Profile "kubenet-378386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378386"

>>> host: crio config:
* Profile "kubenet-378386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378386"

----------------------- debugLogs end: kubenet-378386 [took: 4.376868159s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-378386" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-378386
--- SKIP: TestNetworkPlugins/group/kubenet (4.69s)

TestNetworkPlugins/group/cilium (4.92s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-378386 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-378386

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-378386

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-378386

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-378386

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-378386

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-378386

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-378386

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-378386

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-378386

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-378386

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-378386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378386"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-378386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378386"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-378386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378386"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-378386

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-378386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378386"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-378386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378386"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-378386" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-378386" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-378386" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-378386" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-378386" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-378386" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-378386" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-378386" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-378386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378386"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-378386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378386"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-378386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378386"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-378386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378386"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-378386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378386"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-378386

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-378386

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-378386" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-378386" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-378386

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-378386

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-378386" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-378386" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-378386" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-378386" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-378386" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-378386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378386"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-378386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378386"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-378386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378386"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-378386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378386"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-378386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378386"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-378386

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-378386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378386"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-378386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378386"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-378386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378386"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-378386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378386"

>>> host: cri-docker daemon status:
* Profile "cilium-378386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378386"

>>> host: cri-docker daemon config:
* Profile "cilium-378386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378386"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-378386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378386"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-378386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378386"

>>> host: cri-dockerd version:
* Profile "cilium-378386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378386"

>>> host: containerd daemon status:
* Profile "cilium-378386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378386"

>>> host: containerd daemon config:
* Profile "cilium-378386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378386"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-378386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378386"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-378386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378386"

>>> host: containerd config dump:
* Profile "cilium-378386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378386"

>>> host: crio daemon status:
* Profile "cilium-378386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378386"

>>> host: crio daemon config:
* Profile "cilium-378386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378386"

>>> host: /etc/crio:
* Profile "cilium-378386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378386"

>>> host: crio config:
* Profile "cilium-378386" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378386"

----------------------- debugLogs end: cilium-378386 [took: 4.711789646s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-378386" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-378386
--- SKIP: TestNetworkPlugins/group/cilium (4.92s)